An Overview of Containerization Technologies | The New Stack
https://thenewstack.io/containers/

Harden Ubuntu Server to Secure Your Container and Other Deployments
https://thenewstack.io/harden-ubuntu-server-to-secure-your-container-and-other-deployments/

Ubuntu Server is one of the more popular operating systems used for container deployments. Many admins and DevOps team members assume that if they focus all of their security efforts on the container image and everything above it, they're good to go.

However, if you neglect the operating system on which everything is installed and deployed, you are skipping one of the most important (and easiest) security steps you can take.

In that vein, I want to walk you through a few critical tasks you can undertake with Ubuntu Server to make sure the foundation of your deployments is as secure as possible. You’ll be surprised at how easy this is.

Are you ready?

Let’s do this.

Schedule Regular Upgrades

I cannot tell you how many servers I’ve happened upon where the admin (or team of admins) failed to run regular upgrades. This should be an absolute no-brainer but I do understand the reasoning behind the failure to do this. First off, people get busy, so upgrades tend to fall by the wayside in lieu of putting out fires.

Second, when the kernel is upgraded, the server must be rebooted. Given how downtime is frowned upon, it's understandable why some admins hesitate to run upgrades.

Don’t.

Upgrades are the only way to ensure your server is patched against the latest threats; if you don't apply them, those servers remain vulnerable.

Because of this, find a time when a reboot won’t interrupt service and apply the upgrades then.

Of course, you could also add Ubuntu Livepatch to the system, so patches are automatically downloaded, verified, and applied to the running kernel, without having to reboot.
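
Enabling Livepatch requires an Ubuntu Pro subscription (free for personal use on a handful of machines). A minimal sketch of the setup, assuming you've already generated a token from your Ubuntu Pro account (YOUR_TOKEN below is a placeholder), looks like this:

sudo pro attach YOUR_TOKEN
sudo pro enable livepatch

On older Ubuntu releases the client is named ua rather than pro, but the commands are otherwise the same.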

Do Not Enable Root

Ubuntu ships with the root account disabled. In its place is sudo, and I cannot recommend enough that you leave root disabled. Enabling the root account opens your system(s) up to unnecessary security risks. You can even go so far as to lock the root password entirely, with the command:

sudo passwd -l root


What the above command does is lock the root password, so until you reset it, the root user is effectively inaccessible.
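
If you want to confirm the account is locked, you can check the password status with:

sudo passwd -S root

The L flag in the output indicates a locked password.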

Disable SSH Login for the Root User

The next step you should take is to disable the root user SSH login. By default, Ubuntu Server permits root SSH login with key authentication (the prohibit-password setting), which should be considered a security issue waiting to happen. Fortunately, disabling root SSH access is very simple.

Log in to your Ubuntu Server and open the SSH daemon config file with:

sudo nano /etc/ssh/sshd_config


In that file, look for the line:

#PermitRootLogin prohibit-password


Change that to:

PermitRootLogin no


Save and close the file. Restart SSH with:

sudo systemctl restart sshd


The root user will no longer be allowed access via SSH.

Use SSH Key Authentication

Speaking of Secure Shell, you should always use key authentication, as it is much more secure than traditional password-based logins. This process takes a few steps and starts with you creating an SSH key pair on the system(s) that will be used to access the server. You’ll want to do this on any machine that will use SSH to remote into your server.

The first thing to do is generate an SSH key with the command:

ssh-keygen


Follow the prompts and SSH will generate a key pair and save it in ~/.ssh.

Next, copy that key to the server with the command:

ssh-copy-id SERVER


Where SERVER is the IP address of the remote server.

Once the key has been copied, make sure to attempt an SSH login from the local machine to verify it works.

Repeat the above steps on any machine that needs SSH access to the server, because we're about to disable SSH password authentication. One thing to keep in mind is that, once you disable password authentication, you will only be able to access the server from a machine that has copied its SSH key to the server. Because of this, make sure you have local access to the server in question (just in case).

To disable SSH password authentication, open the SSH daemon configuration file again and look for the following lines:

#PubkeyAuthentication yes


and

#PasswordAuthentication yes


Remove the # characters from both lines and change yes to no on the second. Once you’ve done that save and close the file. Restart SSH with:

sudo systemctl restart sshd


Your server will now only accept SSH connections using key authentication.

Install Fail2ban

Speaking of SSH logins, one of the first things you should do with Ubuntu Server is install fail2ban. This system keeps tabs on specific log files to detect unwanted SSH logins. When fail2ban detects an attempt to compromise your system via SSH, it automatically bans the offending IP address.

The fail2ban application can be installed from the standard repositories, using the command:

sudo apt-get install fail2ban -y


Once installed, you’ll need to configure an SSH jail. Create the jail file with:

sudo nano /etc/fail2ban/jail.local


In the file, paste the following contents:

[sshd]
enabled = true
port = 22
filter = sshd
logpath = /var/log/auth.log
maxretry = 3


Restart fail2ban with:

sudo systemctl restart fail2ban


Now, any time someone attempts to log into your Ubuntu server and fails three times, their IP address will be banned for fail2ban's default ban time (10 minutes), unless you configure a longer bantime in the jail.
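
If you'd rather hand out longer bans, you can add findtime and bantime entries to the same jail. A sketch of a stricter jail (the values here are just examples, in seconds) might look like:

[sshd]
enabled = true
port = 22
filter = sshd
logpath = /var/log/auth.log
maxretry = 3
findtime = 600
bantime = 86400

Restart fail2ban again after any change to the jail file.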

Secure Shared Memory

By default, shared memory is mounted as read/write, which means the /run/shm space can be exploited by any application or service that has access to it. To avoid this, you simply mount /run/shm with more restrictive options.

The one caveat to this is you might run into certain applications or services that require read/write access to shared memory. Fortunately, most applications that require such access are GUIs, but that’s not an absolute. So if you find certain applications start behaving improperly, you’ll have to return read/write mounting to shared memory.

To do this, open /etc/fstab for editing with the command:

sudo nano /etc/fstab


At the bottom of the file, add the following line:

tmpfs /run/shm tmpfs defaults,noexec,nosuid 0 0


Save and close the file. Reboot the system with the command:

sudo reboot


Once the system reboots, shared memory will be mounted with the noexec and nosuid options, so programs can no longer be executed from, or gain setuid privileges through, that space.
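
You can verify the new mount options with a quick check (the exact output will vary by release):

mount | grep shm

The tmpfs entry should now include the noexec and nosuid flags.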

Enable the Firewall

Uncomplicated Firewall (UFW) is disabled by default. This is not a good idea for production machines. Fortunately, UFW is incredibly easy to use and I highly recommend you enable it immediately.

To enable UFW, issue the command:

sudo ufw enable


The next command you’ll want to run is to allow SSH connections. That command is:

sudo ufw allow ssh


You can then allow other services, as needed, such as HTTP and HTTPS like so:

sudo ufw allow http
sudo ufw allow https
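
Two other commands worth knowing: one to review the current rule set and one to rate-limit SSH connections (UFW will temporarily block an address that opens too many connections in a short window):

sudo ufw status verbose
sudo ufw limit ssh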


For more information on UFW, make sure to read the man page with the command:

man ufw

Final Thoughts

These are the first (and often most important) steps to hardening Ubuntu Server. You can also take this a bit further with password policies and two-factor authentication but the above steps will go a long way to giving you a solid base to build on.

Run GUI Applications as Containers with x11docker
https://thenewstack.io/run-gui-applications-as-containers-with-x11docker/

As a developer, you might have a need to work with GUI containers. If that’s the case, you’ll quickly find that the traditional Docker runtime engine doesn’t provide for running GUI applications (unless they are of the web-based type). When you want to develop a containerized GUI application, what do you do?

Fortunately, there are plenty of third-party applications that make it fairly easy to launch GUI containers on a desktop. As you might expect, this does require a desktop environment (otherwise, you’d be developing on a more traditional server-based setup). One such application is called x11docker. As the name implies, this application works with the Linux X display server (which means you’ll need a Linux distribution to make it work).

The x11docker application includes features like:

  • GPU hardware acceleration
  • Sound with PulseAudio or ALSA
  • Clipboard sharing
  • Printer and webcam access
  • Persistent home folder
  • Wayland support
  • Language locale creation
  • Several init systems and DBus within containers
  • Supports several container runtimes and backends (including podman)

You might be asking yourself, “Isn’t X11 insecure?” Yes, it is. Fortunately, x11docker avoids X server leaks by using multiple X servers. So you can use the tool without worrying you’ll be exposing yourself, your system, or your containers to the typical X11 server weaknesses.

One thing to keep in mind is that x11docker creates an unprivileged container user. That user's password is x11docker, and its capabilities within the container are restricted. Because of this, some applications might not behave as expected. For example, when trying to run the Tor Browser from within a container, it cannot access /dev/stdout, which means the container will not run. That's not the case with all containers; I'll demonstrate with the VLC media player, which does work as expected.

I want to show you how to install x11docker on a running instance of a Ubuntu-based desktop operating system. Of course, the first thing you must do is install the Docker runtime engine. For that, I’ll show you two different methods.

Ready? Let’s get this done.

What You’ll Need

As I’ve already mentioned, you’ll need a running instance of a Ubuntu-based Linux desktop distribution. You’ll also need a user with sudo privileges. That’s it.

Installing Docker

First, we’ll go with the traditional method of installing the Docker runtime engine. The first thing to do is add the official Docker GPG to the system with the command:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg


Next, we must add the Docker repository, so we can install the software. This is done with the command:

echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null


With the repository added, we’ll then install a few dependencies using the command:

sudo apt-get install apt-transport-https ca-certificates curl gnupg lsb-release -y


Update apt with:

sudo apt-get update


We can now install Docker with the command:

sudo apt-get install docker-ce docker-ce-cli containerd.io -y


To be able to run Docker commands without sudo (which can be a security risk), add your user to the docker group with the command:

sudo usermod -aG docker $USER


Log out and log back in so the changes take effect.

If you’d rather do this the quick way, you can install Docker with the following commands:

sudo apt-get install curl wget uidmap -y
wget -qO- https://get.docker.com/ | sudo sh


To be able to run Docker rootless, issue the following command:

dockerd-rootless-setuptool.sh install
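
The setup tool prints a few environment variables you'll need in your shell profile; the exact lines come from the tool's output, but they generally look something like this (a sketch, with the socket path depending on your user ID):

export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
systemctl --user enable --now docker
sudo loginctl enable-linger $USER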

How to Install x11docker

Before we can install x11docker, we must install a few dependencies. This can be done with the command:

sudo apt-get install xpra xserver-xephyr xinit xauth xclip x11-xserver-utils x11-utils -y


Next, install x11docker with the command:

curl -fsSL https://raw.githubusercontent.com/mviereck/x11docker/master/x11docker | sudo bash -s -- --update


You can then update x11docker with the command:

sudo x11docker --update

How to Use x11docker

With x11docker installed, it’s time to test it out. Let’s test this with the VLC app container. First, pull the image with the command:

docker pull jess/vlc


Once the image has been pulled, run VLC (with the help of x11docker) with the command:

x11docker --pulseaudio --share=$HOME/Videos jess/vlc


You should see the VLC window open, ready to be used (Figure 1). It will be slightly slower than if the media player were installed directly on your desktop but, otherwise, it should work as expected.

Figure 1: We’ve launched the VLC media player as a container.

Of course, that doesn’t help much if you’re a developer because you want to develop your own containers. You could always create the image you want to work with, tag it, push it to your repository of choice, pull it to your dev system with the docker pull command, and then deploy the container with x11docker.
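
That workflow might look something like the following sketch, where the image name and registry are purely hypothetical placeholders:

docker build -t my-gui-app .
docker tag my-gui-app registry.example.com/my-gui-app:latest
docker push registry.example.com/my-gui-app:latest
docker pull registry.example.com/my-gui-app:latest
x11docker registry.example.com/my-gui-app:latest

Run the pull and x11docker steps on the dev system where you want the GUI to appear.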

And there you have it. You can now run GUI applications from within Docker containers, thanks to x11docker. Build on this by deploying your own, custom containers from your own images and see how it works.

Dive: A Simple App for Viewing the Contents of a Docker Image
https://thenewstack.io/dive-a-simple-app-for-viewing-the-contents-of-a-docker-image/

Have you ever wanted to know the pieces that comprise a Docker image, without having to build a complete Software Bill of Materials first? Maybe you not only want to view the contents but also to find ways of shrinking the size of those images?

To do that, you need to know things like layers, layer details, the contents of each layer and image details.

Sounds like hard work, doesn’t it?

With the help of an app called Dive, the process is actually quite simple.

Dive includes the following features:

  • Image content breakdown
  • Displays content detail of each layer
  • Displays the total size of the image being examined
  • Displays wasted space within the image (lower = better)
  • Displays the efficiency score for an image (higher = better)

That’s some fairly important information to have at your fingertips, especially for a developer trying to create Docker images that are as efficient and secure as possible. You certainly don’t want to include unnecessary applications in the layers of your images, and Dive is a great way to discern exactly what’s there.

Let’s get Dive installed.

What You’ll Need

Dive can be installed on Ubuntu, Red Hat Enterprise Linux and Arch-based distributions, as well as MacOS and Windows. I’m going to demonstrate the process on Ubuntu 22.04. If you use a different operating system, you’ll need to alter the installation process of both Docker and Dive. For MacOS, Dive can be installed with either Homebrew or MacPorts, and on Windows, Dive can be installed with a downloaded installer file for the OS.

Installing Docker

To examine an image with Dive, you must be able to first pull it with Docker (unless you plan on creating your own Docker images…which means you’ll need Docker installed anyway). Here’s how you can install the Docker runtime engine on Ubuntu 22.04.

First, you must download and install the official Docker GPG key (so you can install the software). To do this, log into your Ubuntu instance, open a terminal window and issue the command:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg


With the GPG key added, it’s time to create the proper Docker repository, which can be done with the following command:

echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null


Now that the repository is correctly added, we’ll install a few dependencies with the command:

sudo apt-get install apt-transport-https ca-certificates curl gnupg lsb-release -y


Before we can install Docker, we must now update apt with:

sudo apt-get update


Install Docker with the command:

sudo apt-get install docker-ce docker-ce-cli containerd.io -y


In order to allow your user to work with Docker (without having to employ sudo, which can be a security issue), you must add the user to the docker group with the command:

sudo usermod -aG docker $USER


Log out and log back in so the changes take effect.

Installing Dive

It’s now time to install Dive. On Ubuntu, this is also done from the command line. There are three commands to use.

The first command defines the latest dive version to an environment variable called DIVE_VERSION. That command is:

export DIVE_VERSION=$(curl -sL "https://api.github.com/repos/wagoodman/dive/releases/latest" | grep '"tag_name":' | sed -E 's/.*"v([^"]+)".*/\1/')


Next, we download the latest version with the command:

curl -OL https://github.com/wagoodman/dive/releases/download/v${DIVE_VERSION}/dive_${DIVE_VERSION}_linux_amd64.deb


The above command will download a .deb file to the current working directory. You can then install Dive with:

sudo apt install ./dive_${DIVE_VERSION}_linux_amd64.deb


When the installation completes, you’re ready to test the application.

Using Dive

With both Dive and Docker installed, Dive is capable not only of diving into a container image but also of pulling the image for you.

Let’s say you want to examine the latest Alpine Docker image. The command for that would be:

dive alpine:latest


Once the image is pulled, Dive will display its contents, showing each layer and the contents within (Figure 1).

 

Figure 1: The Dive tool showing the layers for the latest Alpine image.

Dive automatically pulls the image from Docker Hub. You can define a different source using the --source option, like so:

dive IMAGE --source SOURCE


Where IMAGE is the name of the image you want to examine and SOURCE is where Dive should pull it from (for example, docker, podman or docker-archive).
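
For example, to examine an image through Podman instead of the Docker engine (assuming Podman is installed), you could run:

dive alpine:latest --source podman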

To exit from Dive, use the [Ctrl]+[C] keyboard combination.

Let’s say you want to examine the MongoDB image. Do that with the command:

dive mongo:latest


Given this is a far more complicated image, you’ll find multiple layers. You can navigate between the layers with your cursor keys. The currently selected layer will be indicated by a small purple square (Figure 2).

 

Figure 2: We’ve dived into the latest MongoDB image and have found multiple layers.

If you hit the Tab key, you’ll move the cursor to the right pane, where you can then use your cursor keys to navigate the layer hierarchy.

In the bottom left pane, you’ll see the wasted space and image efficiency information. If this is a custom image and those details aren’t satisfactory, you’ll need to do a bit of work on the image, rebuild it and dive back in.
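
Dive can also help automate that feedback loop. The project documents a CI mode that fails a build when an image misses efficiency thresholds defined in a .dive-ci file; a sketch (double-check the keys against the current README) looks like this:

CI=true dive mongo:latest

And the .dive-ci file:

rules:
  lowestEfficiency: 0.95
  highestWastedBytes: 20MB
  highestUserWastedPercent: 0.10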

And that’s pretty much the basics of using the Dive tool to view the contents of any Docker image. Hopefully, this command line application will help you build more efficient and secure images (or at least understand exactly what makes up the images you use).

Kubernetes Isn’t Always the Right Choice
https://thenewstack.io/kubernetes-isnt-always-the-right-choice/

These days, you can encapsulate virtually any application in a container for execution. Containers solve a lot of problems, but they introduce a new challenge of orchestration. Because of the growing need for container orchestration from a huge number of teams working to build cloud native applications, Kubernetes has gained significant popularity as a powerful tool to solve that challenge.

Building in a well-managed Kubernetes environment offers numerous benefits such as autoscaling, self-healing, service discovery and load balancing. However, embracing the world of Kubernetes often implies more than just adopting container orchestration technology. Teams need to strategically consider, “Is Kubernetes the right choice for my solution?” And they must do so by evaluating several components of this broader question.

Is My Team Composition a Fit for Kubernetes?

There’s no shortage of articles praising the capabilities of Kubernetes (K8s), and that’s not what we aim to dispute. K8s is the right choice in many cases. That said, direct interaction with and maintenance of K8s isn’t appropriate for all teams and projects.

  1. Small startups with cloud native applications: These teams will find direct management of Kubernetes to be a complex, time-consuming distraction from their goal of releasing and scaling a product. Given their size, the teams will not have the bandwidth to manage Kubernetes clusters while also developing their application.
  2. Enterprise teams with a variety of application types: For larger teams with specialist skills, Kubernetes is an excellent choice. However, fully managed container runtimes or Kubernetes as service offerings should still be considered. These services allow limited DevOps resources to focus on team productivity, developer self-service, cost management and other critical items.
  3. Midsize companies with a DevOps culture: While these teams are more prepared for a move to Kubernetes, it’s a major project that will disrupt existing workflows. Again, managed offerings unlock many benefits of Kubernetes without significant investment.
  4. Software consultancies: While these teams are adaptable, relying on Kubernetes can limit their ability to serve clients with different needs, as it pushes the consultancy toward recommending it even when it’s not the best fit.

How Complex Is My Project? Is K8s Overkill?

Rather than determining whether K8s meets some of your requirements, consider identifying specific characteristics and requirements that do not align well with capabilities of Kubernetes or introduce unnecessary complexity.

  1. Minimal scalability needs: If the project has consistently low traffic or predictable and steady resource demands without significant scaling requirements, Kubernetes will introduce unnecessary overhead. In these cases, managed container runtimes or virtual private server (VPS) solutions typically represent better value.
  2. Simple monolithic applications: If the project is a monolithic application with limited dependencies and doesn’t require independently scalable services or extremely high instance counts, Kubernetes is too complex for its needs.
  3. Static or limited infrastructure: If the project has small or static infrastructure without much variation in resource usage, then simpler deployment options such as managed services or VPS will suffice.
  4. Limited DevOps resources: Kubernetes requires expertise in container orchestration, which is not feasible for projects with limited DevOps resources or if the team is not willing to invest in learning Kubernetes. The benefits of containers can still be achieved without this additional investment.
  5. Prototyping and short-term projects: For projects with short development life cycles or limited production durations, the Kubernetes overhead cannot be justified.
  6. Project cost constraints: If the project has stringent budget constraints, the additional cost of setting up and maintaining a Kubernetes cluster will not be feasible. This is particularly true when considering the cost of the highly skilled team members required to do this work.
  7. Infrastructure requirements: Kubernetes can be resource-intensive, requiring robust infrastructure to run effectively. If your projects are small or medium-sized with modest resource requirements, using managed services or serverless is far more appropriate.

The complexity of your requirements alone won’t determine whether Kubernetes is perfect or excessive for your team; however, it can help you lean one way or the other. If you’re using Kubernetes directly, it won’t inherently elevate your product. Instead, its strength lies in crafting a resilient platform on which your product may thrive.

Image 1: Pyramid

The consequence is that the more you commit to building your own platform layer underneath your product, the further your development effort shifts away from the product that is the foundation of your business.

This unearths the real question: Are we building a platform or are we trying to expedite our time to market with more immediate return on investment for our core business objectives?

Do We Have the Necessary Skill Set?

Kubernetes is often recognized for its challenging learning journey. What contributes to this complexity? To offer clarity, I’ve curated a list of topics based on specific criteria that help gauge the effort needed to improve one’s skills.

Complexity levels:

  • Basic: Fundamental, easier concepts
  • Intermediate: Concepts needing some pre-existing knowledge
  • Advanced: Complex concepts requiring extensive knowledge

Note: These complexity levels will vary based on individual background and prior experience.

Learning areas:

  • Containerization: Understanding of containers and tools like Docker. (Basic)
  • Kubernetes architecture: Knowledge about pods, services, deployments, ReplicaSets, nodes and clusters. (Intermediate)
  • Kubernetes API and objects: Understanding the declarative approach of Kubernetes, using APIs and YAML. (Intermediate)
  • Networking: Understanding of inter-pod communication, services, ingress, network policies and service mesh. (Advanced)
  • Storage: Knowledge about volumes, persistent volumes (PV), persistent volume claims (PVC) and storage classes. (Advanced)
  • Security: Understanding of Kubernetes security, including RBAC, security contexts, network policies and pod security policies. (Advanced)
  • Observability: Familiarity with monitoring, logging and tracing tools like Prometheus, Grafana, Fluentd and Jaeger. (Intermediate)
  • CI/CD in Kubernetes: Integration of Kubernetes with CI/CD tools such as Jenkins and GitLab, and use of Helm charts for deployment. (Intermediate)
  • Kubernetes best practices: Familiarity with best practices and common pitfalls in the use of Kubernetes. (Intermediate to Advanced)

For teams that lack the necessary expertise or the time to learn, the overall development and deployment process can become overwhelming and slow, which will not be healthy for projects with tight timelines or small teams.

What Are the Cost Implications?

While Kubernetes itself is open source and free, running it is not. You’ll need to account for the expenses associated with the infrastructure, including the cost of servers, storage and networking as well as hidden costs.

The first hidden cost lies in its management and maintenance — the time and resources spent on training your team, troubleshooting, maintaining the system, maintaining internal workflows and self-service infrastructure.

For various reasons, the salaries of the highly skilled employees required for this work are overlooked by many when calculating the cost of a full-blown Kubernetes environment. Be wary of the many flawed comparisons between fully managed or serverless offerings against self-managed Kubernetes. They often fail to account for the cost of staff and the opportunity costs associated with lost time to Kubernetes.

The second hidden cost is tied to the Kubernetes ecosystem. Embracing the world of Kubernetes often implies more than just adopting a container orchestration platform. It’s like setting foot on a vast continent, rich in features and a whole universe of ancillary tools, services and products offered by various vendors, which ultimately introduce other costs.

Conclusion

A good tool is not about its hype or popularity but how well it solves your problems and fits into your ecosystem. In the landscape of cloud native applications, Kubernetes has understandably taken an oversized share of the conversation. However, I encourage teams to consider the trade-offs of different approaches made viable by solutions like OpenShift, Docker Swarm or serverless and managed services orchestrated by frameworks like Nitric.

In a follow-up post, I’ll explore an approach to creating cloud native apps without direct reliance on Kubernetes. I’ll dig into the process of building and deploying robust, scalable and resilient cloud native applications using infrastructure provisioned through managed services such as AWS Lambda, Google Cloud Run and Azure Container Apps.

This approach to developing applications for the cloud was the inspiration for Nitric, the cloud framework we are building that focuses on improving the experience for both developers and operations.

Nitric is an open source multilanguage framework for cloud native development designed to simplify the process of creating, deploying and managing applications in the cloud. It provides a consistent developer experience across multiple cloud platforms while abstracting and automating the complexities involved in configuring the underlying infrastructure.

For teams and projects that find direct interaction and management of Kubernetes unsuitable, whether due to budget constraints, limited resources or skill set, Nitric provides an avenue to harness the same advantages. Dive deeper into Nitric’s approach and share your feedback with us on GitHub.

Monitor, Control and Debug Docker Containers with WhaleDeck
https://thenewstack.io/monitor-control-and-debug-docker-containers-with-whaledeck/

When you want to work with your Docker containers, do you opt to use the command line or do you prefer to go the GUI route?

If the latter, you’ve probably found a mixture of tools that range from the overly complex to the vastly simplified. More than likely, you’d prefer something that exists somewhere in the middle, where form and function meet to create an app with just the right amount of features that make it easier for you to monitor, control, and debug those containers.

I’ve tried a wide variety of GUIs for Docker Containers and although I’m partial to Portainer, I understand that particular tool can be a bit much for some.

Fortunately, I’ve found a tool that makes working with Docker containers about as simple as possible. The app in question is called WhaleDeck and it’s only available for MacOS, iOS, and iPadOS devices. WhaleDeck easily connects to your Linux servers hosting Docker containers and simplifies a number of tasks associated with container management.

Before I continue, know this app does have its limitations. For example, you can’t build and deploy a container from WhaleDeck. But what you can do is:

  • Shutdown/restart the server
  • Start all containers
  • Stop all containers
  • Start, stop, and pause individual containers
  • Manage networks and volumes (Pro version only)
  • View resources (CPU, Memory, Uptime, Containers, Network, Drive), mounts, ports, and logs.

What I find most impressive about WhaleDeck is that it gives you more information than you might expect for a container. When viewing an individual container, you’ll see create, start and finish dates, state, restart policy, PID, platform, image, mounts, networks (IP gateway and IP), ports, and exposed ports. Besides building and launching containers, the only other feature missing from WhaleDeck is the ability to manage images.

As far as security is concerned, the passwords configured in WhaleDeck are safely stored in Apple’s iCloud Keychain, so only you can access them. On top of that, WhaleDeck does not track user statistics, so you don’t have to worry that the developers are keeping tabs on you.

Think of WhaleDeck as your Docker management console, where you can observe and manage your containers from your MacBook, iMac, iPhone, or iPad.

The basic WhaleDeck feature set can be used for free with two connected servers. If you need to connect WhaleDeck to more than two servers, you’ll have to pony up for the Pro version, which is only a one-time $19.99 cost. The Pro version not only gives you unlimited servers but also adds MacOS Server support, iCloud sync, and Custom settings.

Let’s get WhaleDeck installed and see how easy it is to connect it to your Docker server.

Installing WhaleDeck

I’m going to demonstrate the installation of WhaleDeck on MacOS (as I don’t have either an iOS or iPadOS device). If you’ve ever installed an application on MacOS, you know how simple it is and the installation of WhaleDeck is no different.

All you have to do is open the MacOS App Store and search for WhaleDeck. Once you see the entry (Figure 1; the app is by Florian Seida), click the Get button to install the app.

Figure 1: The Whaledeck entry in the MacOS App Store.

Once WhaleDeck is installed, you’ll find it in the MacOS Launchpad. Click the launcher to open the app. When WhaleDeck first opens, you’ll be greeted by an onboarding wizard that walks you through how the app is used (Figure 2).

Figure 2: The WhaleDeck onboarding feature makes it easy to learn about the features found in the app.

At the end of the onboarding wizard, you can also opt to test the Pro features (for 14 days).

Adding your First Server

After clicking through the onboarding wizard, you’ll land on the main page, which is fairly empty (Figure 3).

Figure 3: The main WhaleDeck is empty and waiting for you to connect to your first server.

To add your first server, click the + button in the upper right corner. In the resulting popup (Figure 4), fill out the required information.

Figure 4: The WhaleDeck add-server popup.

You’ll need:

  • Alias – a nickname for your server
  • Host – IP address or domain for your Docker server
  • Port – This is the SSH port used on your hosting server
  • Username – a username that belongs to the docker group on your server
  • Password – the password for the user
  • Operating system – if you’re using the free version, you can only select Linux
  • Key – if you use SSH key authentication, you’ll need to add the key here

After filling out the necessary information, click Save server and WhaleDeck will go through the process of connecting to the server. If you wind up with an error, it could be that you’ve not accepted the SSH fingerprint from the server. Should that be the case, open the MacOS terminal app and SSH into your server. Accept the fingerprint and complete the login process. Once you’ve done that, you should be able to successfully save the server.

With the server added you can double-click the entry to expand it, where you can start managing your containers (Figure 5).

Figure 5: Your containers are now ready to be managed by WhaleDeck.

At this point, you can double-click on a container listing to open a window (Figure 6) that allows you to view various aspects of the container as well as stop/start/pause it.

Figure 6: A running container as viewed with the WhaleDeck app.

And that’s all there is to installing, connecting, and using the WhaleDeck Docker management app. Give the free version of this tool a try and see if it doesn’t make managing your containers a bit easier.

Deploy Etherpad for an In-House Alternative to Google Docs
https://thenewstack.io/deploy-etherpad-for-an-in-house-alternative-to-google-docs/

If your developers (or any team in your business) need an in-house solution for things like documentation, code, YAML files, or just about anything else, there is a vast array of options to choose from.

One such option is Etherpad, which is a web-based text editor that offers real-time collaboration, versioning, and formatting. Etherpad also supports plugins for things like alignment, headings, markdown, image uploads, comments, font colors, TOC, hyperlink embedding, spellcheck, and more. To find a complete list of plugins, check out the official listing on the Etherpad site.

Etherpad is also available in 105 languages and is used by millions around the globe.

With this tool, your teams could easily collaborate on many different types of documents, without having to rely on a third-party service.

I’m going to demonstrate deploying Etherpad by way of Docker on Ubuntu Server 22.04. You can deploy Etherpad in-house or even on your third-party cloud host.

What You’ll Need

The only things you’ll need for this are a running instance of Ubuntu Server and a user with sudo privileges. Because we’re deploying this as an in-house service, it doesn’t require a domain name. If, however, you want to access Etherpad from outside your LAN, you’ll have to take the extra steps to configure the server for a domain and make sure your network hardware is configured to route traffic to the Etherpad server.

Installing Docker

Because we’re going to deploy Etherpad as a container, you’ll want to have the Docker runtime engine installed on your machine, which means you can deploy on any platform that supports Docker. Of course, being that Ubuntu Server is my go-to platform of choice, I’ll demonstrate on that OS.

Log into your instance of Ubuntu Server. If your server has a GUI, open a terminal window. Once logged in, the first thing to do is add the official Docker GPG key with the command:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg


Next, add the official Docker repository with:

echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null


Install the necessary dependencies with:

sudo apt-get install apt-transport-https ca-certificates curl gnupg lsb-release git -y


Update apt with the command:

sudo apt-get update


You can now install the latest version of the Docker engine with the command:

sudo apt-get install docker-ce docker-ce-cli containerd.io -y


Add your user to the docker group (so you can work with Docker without having to use sudo…which can be a security risk) with the command:

sudo usermod -aG docker $USER


Log out and log back in, so the changes take effect.

With Docker installed, you’re ready to deploy Etherpad.

Deploying Etherpad with Docker

It’s now time to deploy Etherpad. First, let’s pull the latest official image with the command:

docker pull etherpad/etherpad


After the image pulls, you can deploy the container with the command:

docker run --detach --publish 9001:9001 etherpad/etherpad


Give the container a minute to spin up. You can check its status with the command:

docker ps -a | grep etherpad


The output should include healthy, which indicates the container is up and running and ready for connections.
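
If you'd prefer a container with a predictable name that comes back up after a reboot, an optional variation on the run command would be:

docker run --detach --name etherpad --restart unless-stopped --publish 9001:9001 etherpad/etherpad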

Accessing Etherpad

At this point, Etherpad is ready. Point a web browser (that’s connected to the same network hosting Etherpad) to http://SERVER:9001 (where SERVER is the IP address of the hosting server). You’ll be greeted by the New Pad create button (Figure 1).

Figure 1: A fresh Etherpad instance, ready for collaboration.

You can either click New Pad or type a name for a new pad and click OK. Since this is your first time working with Etherpad, there won’t be any Pads to work with. Once Etherpad loads, you’ll see a quick note on the first pad.

The Full Install Route

One thing to keep in mind, however, is that the Docker version of Etherpad uses DirtyDB, which isn’t generally recommended for production. You can suppress this warning (as instructed by the initial Pad) or you can skip the Docker deployment and do a full installation. Here are the steps for installing Etherpad on Ubuntu Server 22.04.

  1. If you already deployed the container, stop it with docker stop ID (where ID is the ID of the Etherpad container, which can be found with docker ps -a |grep etherpad).
  2. Add the Nodejs repository with curl -fsSL https://deb.nodesource.com/setup_current.x | sudo -E bash -
  3. Install Nodejs with sudo apt-get install nodejs -y
  4. Install MariaDB server with sudo apt-get install mariadb-server -y
  5. Log into the MariaDB console with sudo mysql
  6. Create the database with CREATE DATABASE etherpad_db;
  7. Create a user with CREATE USER etherpaduser@localhost IDENTIFIED BY 'PASSWORD'; (where PASSWORD is a strong password).
  8. Grant the necessary privileges with GRANT ALL PRIVILEGES ON etherpad_db.* TO etherpaduser@localhost;
  9. Flush the privilege table with FLUSH PRIVILEGES;
  10. Exit the MariaDB console with EXIT
  11. Add a dedicated user with sudo adduser ether
  12. Change to the new user with su ether
  13. Clone the Etherpad source with git clone --branch master https://github.com/ether/etherpad-lite.git
  14. Change into the newly-created directory with cd etherpad-lite
  15. Set a necessary environment variable with export NODE_ENV=production
  16. Install Etherpad with src/bin/run.sh
  17. Open the firewall with sudo ufw allow 9001.

The above steps will install and start Etherpad but won’t return your bash prompt. There’s still some work to do. Cancel the running service with the [Ctrl]+[C] keyboard shortcut. Open the Etherpad settings file with the command:

nano etherpad-lite/settings.json

Change this section:

"dbType" : "dirty",
"dbSettings" : {
"filename" : "var/dirty.db"
},


To this:

/*
"dbType" : "dirty",
"dbSettings" : {
"filename" : "var/dirty.db"
},
*/


Next, locate the "dbType" : "mysql" section and remove the /* and */ lines around it. In that same section, change the user option to etherpaduser, set the password option to the password you created for the Etherpad database user, and change the database entry to etherpad_db. Then locate the trustProxy line and change it from false to true.
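
After those edits, the database portion of settings.json should look roughly like the following sketch (the keys mirror the commented mysql example in Etherpad's settings template; PASSWORD is the database password you created earlier):

"dbType" : "mysql",
"dbSettings" : {
  "user":     "etherpaduser",
  "host":     "localhost",
  "port":     3306,
  "password": "PASSWORD",
  "database": "etherpad_db",
  "charset":  "utf8mb4"
},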

In the same file, locate the requireAuthentication variable and change it from false to true.

Finally, locate the user section (it starts with “users”: {) and remove the /* and */ lines. In that same section, change the password (which defaults to changeme1) to something strong and unique.

Save and close the file.

Install the necessary dependencies by issuing the command:

./bin/installDeps.sh


When that completes, exit from the current user with the exit command.

Next, we must create a systemd service file with the command:

sudo nano /etc/systemd/system/etherpad.service


Paste the following lines into the file:

[Unit]
Description=Etherpad-lite, the collaborative editor.
After=syslog.target network.target

[Service]
Type=simple
User=ether
Group=ether
WorkingDirectory=/etherpad-lite
Environment=NODE_ENV=production
ExecStart=/usr/bin/node /etherpad-lite/src/node/server.js
Restart=always

[Install]
WantedBy=multi-user.target


Save and close the file.

Reload the systemd daemon with the command:

sudo systemctl daemon-reload


Start and enable the service with:

sudo systemctl enable --now etherpad


You should now be able to access Etherpad in the same way as you did after deploying it with Docker.

Either way, your teams will enjoy an in-house, real-time collaboration tool where they can work together on documentation, code, YAML files, or just about anything else that can be created within a text editor.

Unleashing the Power of Kubernetes Application Mobility
https://thenewstack.io/unleashing-the-power-of-kubernetes-application-mobility/

In a previous article for The New Stack, I discussed the challenges and benefits of cloud native application portability.

Portable applications are good for hot backups, multicloud load balancing, deploying applications to new environments and switching from one cloud to another for business reasons.

However, such portability is difficult, because Kubernetes applications consist of ephemeral microservices, configurations and data. Kubernetes also handles state information in an abstracted way, since microservices are generally stateless.

It is therefore important to understand the value of Kubernetes application mobility. At first glance, application mobility appears to be synonymous with application portability, especially in the Kubernetes context.

If we look more closely, however, there is an important distinction, a distinction that clarifies how organizations can extract the most value out of this important feature.

Application Migration, Portability and Mobility: A Primer

Application migration, portability and mobility are similar but distinct concepts. Here are the differences.

  • Application migration means moving either source code or application binaries from one environment to another, for example, from a virtual machine instance to one or more containers.
  • Cloud native application portability centers on moving microservices-based workloads running on different instances of Kubernetes.
  • Cloud native application mobility, the focus of this article, means ensuring that the consuming applications that interact with microservices work seamlessly regardless of the locations of the underlying software, even as workloads move from one environment to another.

Application portability supports application mobility but is neither necessary nor sufficient for it.

There are many benefits of application mobility, including cloud-service provider choice, revenue analyses and risk profile management. For Kubernetes in particular, application mobility is a valuable data management tool for near real-time analyses and performance evaluation.

As customer use drives the demands for an application, application owners can optimize the mix of cloud environments for each application and risk management system.

The impact of application mobility is its strategic value to short- and long-term planning and operational efforts necessary to protect a Kubernetes application portfolio across its life cycle.

Four Cloud Native Application Mobility Scenarios

For Kubernetes data management platform vendor Kasten by Veeam, application mobility serves four important use cases: cross-cloud portability, cluster upgrade testing, multicloud balancing and data management via spinning off a copy of the data.

Cross-cloud portability and cluster upgrade testing are clear examples of application portability supporting application mobility: application mobility provides seamless behavior for consuming applications while workloads are ported, either to other clouds or to upgraded clusters, respectively.

In Kubernetes, containerized applications are independent from the underlying infrastructure. This independence allows for transfer across a variety of platforms, including on-premises, public, private and hybrid cloud infrastructures.

The key metric for Kubernetes application portability is the mean time to restore (MTTR) — how fast an organization can restore applications from one cluster to another.

Cluster upgrade testing is crucial for business owners who want to manage Kubernetes changes by predictably migrating applications to an upgraded cluster. The ability to catch and address upgrade-related issues as part of a normal operating process is imperative.

The key metric for cluster upgrade testing is the ability to catch important changes before they become a problem at scale so that the organization can address the problems, either by restoring individual components or the entire application.

Multicloud load balancing is an example of application mobility that doesn’t call upon portability, as an API gateway directs traffic and handles load balancing across individual cloud instances. In fact, API gateways enable load balancing across public and private clouds and enable organizations to manage applications according to the business policies in place.

The key metrics for multicloud load balancing center on managing cost, risk and performance in real time as the load balancing takes place.

Finally, data management leverages portability to support application mobility. An organization might use a copy of production data to measure application performance, data usage or other parameters.

Such activities depend on the seamless behavior across live and copied data, behavior that leverages application mobility to spin data to an offline copy for both data analysis as well as data protection once an application or service has begun production.

Key metrics for data management include measures of live application and service data performance, data usage and other characteristics of the current application data set.

The Intellyx Take

The distinction between Kubernetes application portability and mobility is subtle, but important.

Portability is, in essence, one layer of abstraction below mobility, as it focuses on the physical movement of application components or workloads.

Application mobility, in contrast, focuses on making the consumption of application resources location-independent, allowing for the free movement of those consumers as well as the underlying resources.

Given that Kubernetes is infrastructure software, such consumers are themselves applications that may or may not directly affect the user experience. Furthermore, the workloads running on that infrastructure are themselves abstractions of a collection of ephemeral and persistent elements.

Workloads may move, or they may run in many places at once, or they may run in one place and then another, depending on the particular use case. When consuming applications are none the wiser, the organization can say that they have achieved application mobility.

Create a Samba Share and Use from in a Docker Container
https://thenewstack.io/create-a-samba-share-and-use-from-in-a-docker-container/

At some point in either your cloud- or container-development life, you’re going to have to share a folder from the Linux server. You may only have to do this in a dev environment, where you want to be able to share files with other developers on a third-party, cloud-hosted instance of Linux. Or maybe file sharing is part of an app or service you are building.

And because Samba (the Linux application for Windows file sharing) is capable of high availability and scaling, it makes perfect sense that it could be used (by leveraging a bit of creativity) within your business, your app stack, or your services.

You might even want to use a Samba share to house a volume for persistent storage (which I’m also going to show you how to do). This could be handy if you want to share the responsibilities for, say, updating files for an NGINX-run website that was deployed via Docker.

Even if you’re not using Samba shares for cloud or container development, you’re going to need to know how to install Samba and configure it such that it can be used for sharing files to your network from a Linux server and I’m going to show you how it’s done.

There are a few moving parts here, so pay close attention.

I’m going to assume you already have Docker installed on a Ubuntu server but that’s the only assumption I’ll make.

How to Install Samba on Ubuntu Server

The first thing we have to do is install Samba on Ubuntu Server. Log into your instance and install the software with the command:

sudo apt-get install samba -y


When that installation finishes, start and enable the Samba service with:

sudo systemctl enable --now smbd


Samba is now installed and running.

You then have to add a password for any user who’ll access the share. Let’s say you have the user Jack. To set Jack’s Samba password, issue the following command:

sudo smbpasswd -a jack


You’ll be prompted to type and verify the password.

Next, enable the user with:

sudo smbpasswd -e jack

How to Configure Your First Samba Share

Okay, let’s assume you want to create your share in the folder /data. First, create that folder with the command:

sudo mkdir /data


In order to give it the proper permissions (so that only the users who need access have it), you might want to create a new group and then add users to the group. For example, create a group named editors with the command:

sudo groupadd editors


Now, change the ownership of the /data directory with the command:

sudo chown -R :editors /data


Next, add a specific user to that new group with:

sudo usermod -aG editors USER


Where USER is the specific user name.

Now, make sure the editors group has write permission for the /data directory with:

sudo chmod -R g+w /data
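
Samba also needs a share definition before anyone can connect. Open the Samba configuration file with sudo nano /etc/samba/smb.conf and add a block like the following at the bottom (a minimal sketch; the share name data is just an example):

[data]
   path = /data
   browseable = yes
   read only = no
   valid users = @editors

Save and close the file, then restart Samba so the new share is picked up:

sudo systemctl restart smbd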


At this point, any member of the editors group should be able to access the Samba share. How they do that will depend on the operating system they use.

How to Create a Persistent Volume Mapped to the Share

For our next trick, we’re going to create a persistent Docker volume (named public) that is mapped to the /data directory. This is done with the following command:

docker volume create --opt type=none --opt o=bind --opt device=/data public


To verify the creation, you can inspect the volume with the command:

docker volume inspect public


The output will look something like this:

[
{
"CreatedAt": "2023-07-27T14:44:52Z",
"Driver": "local",
"Labels": {},
"Mountpoint": "/var/lib/docker/volumes/public/_data",
"Name": "public",
"Options": {
"device": "/data",
"o": "bind",
"type": "none"
},
"Scope": "local"
}
]


Let’s now add an index.html file that will be housed in the share and used by our Docker NGINX container. Create the file with:

nano /data/index.html


In that file, paste the following:
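
The exact markup is up to you; any simple page will do. A minimal placeholder (purely an example) could be:

<!DOCTYPE html>
<html>
  <head>
    <title>Served from a Samba share</title>
  </head>
  <body>
    <h1>Hello from an NGINX container backed by a Samba share</h1>
  </body>
</html>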

Save and close the file.

Deploy the NGINX Container

We can now deploy our NGINX container that will use the index.html file in our public volume that is part of our Samba share. To do that, issue the command:

docker run -d --name nginx-samba -p 8090:80 -v public:/usr/share/nginx/html nginx


Once the container is deployed, point a web browser to http://SERVER:8090 (where SERVER is the IP address of the hosting server), and you should see the index.html file that we created above (Figure 1).

Figure 1: Our custom index.html has been officially served in a Docker container.

Another really cool thing about this setup is that anyone with access to the Samba share can edit the index.html file (even with the container running) to change the page. You don’t even have to stop the container. You could even create a script to automate updates of the file if you like. For this reason, you need to be careful who has access to the share.

Congrats, you’ve just used Docker and Samba together. Although this might not be a wise choice for production environments, for dev or internal services/apps, it could certainly come in handy.

5 Best Practices for Reducing CVEs in Container Applications
https://thenewstack.io/5-best-practices-for-reducing-cves-in-container-applications/

All software (and even hardware) contains security vulnerabilities, and the number of publicly reported CVEs (common vulnerabilities and exposures) continues to grow every day. All software depends on other software, which can include a vulnerability at each level, further exacerbating the risk and reinforcing the urgency for organizations to detect and patch vulnerabilities. Let’s take a look at how to detect CVEs and best practices for reducing them in your container applications through prevention and detection.

Typical Sources of CVEs

There are three common sources of CVEs.

  1. Base image: All distributions (Alpine, Ubuntu, Red Hat, etc.) have built-in libraries that are updated with each distro’s release. If your software is pinned to a base image version that was released a year ago, the underlying libraries would not have been updated since and are likely to contain CVEs.
  2. Code: Your code depends on other software libraries (code dependencies), and if these libraries are pinned to old versions, they can contain CVEs. Although the CVE is not directly related to your code, your code might still be vulnerable if the vulnerable function is being used. This is known as a transitive vulnerability. For example, if you use Kubernetes libraries within your own code and one of the Kubernetes libraries has a CVE, your code can potentially be vulnerable.
  3. Coding language: The language libraries you use could themselves have CVEs. For example, Golang itself could have CVEs in its own libraries that get patched in newer Golang releases.

All of these sources open up your environment to potential CVEs and increase your attack surface.

Detecting CVEs

The work of detecting CVEs is usually carried out by vulnerability scanners, of which there are many free and paid versions. Vulnerability scanners identify everything your software consists of (base image, code, language, etc.) in order to build a list of software components or a software bill of materials (SBOM). This information is checked against the scanner’s own database to identify which of the libraries have vulnerabilities.

There can be variability in scan results from different vulnerability scanners. There are a few reasons for this. First, some scanners might only use CVEs from national standard databases, such as the U.S. National Vulnerability Database (NVD), while others use additional third-party databases. Second, vulnerability scanners differ in how they identify libraries within code, detect software binaries and match known vulnerabilities to those binaries. Third, how scanners handle transitive dependencies (dependencies of dependencies that you import into your code) varies in depth and granularity. All of these considerations can make a difference in the results of a scan.

All this is to say that vulnerability scanners are not perfect; they can miss things or not get the whole picture. They might not find all sources (everything your software is built of) and might return varying results. Let’s look at some best practices for both prevention and detection that can help guard against these deficits.

Best Practices for Reducing CVEs in Container Applications

It’s important to detect and patch CVEs in a timely manner to ensure the security of your environment/product. The sheer number of reported CVEs combined with multiple potential sources of CVEs can make this a difficult task. Here are five best practices for reducing CVEs in your container applications.

  1. Trim your base image: Including too many libraries in your code and base image increases your attack surface. The best practice is to include only the libraries you need. To do this, start with a scratch image and copy in only the specific libraries from the latest base image that are needed (see the sketch after this list). If there are 100 libraries in a base image, only include the 10 that you actually need.
  2. Use the latest and greatest libraries: Consistently follow code updates for the libraries you use, and stay ahead of the curve by updating regularly when new versions are available. If there is a new version of the coding language you use, you should include it in the next release.
  3. Use more than one vulnerability scanner: Since vulnerability scanners can produce different results, you should use more than one scanner in an effort to detect as many potential vulnerabilities as possible. We also recommend checking your code source (such as GitHub) to get even wider coverage in case the scanners miss anything or don’t have access to your code.
  4. Scan often: We recommend performing regular (at least once a week) scans of your software to monitor the status of CVEs. Leading up to a release, increasing the frequency of scans to multiple times a day helps to ensure CVEs do not slip through. Alternatively, it is recommended to integrate vulnerability scanning into your build or deploy CI/CD process if this better fits your organization’s software release process.
  5. Shift left: Traditionally, vulnerability scanning and security assessments have been performed by specialized security teams or external auditors after the code is developed. Shift the responsibility of vulnerability scanning processes closer to development teams to avoid a long and complicated resolution process when CVEs are discovered. Shifting visibility and creating processes to proactively address CVEs for developers empowers them to become more responsible and involved in the security of their own code from the beginning.
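As an example of the first practice, a multistage Dockerfile along these lines builds in a full image and then copies only the compiled binary (plus CA certificates, if the app needs TLS) into an empty scratch image. The module path and binary name are placeholders, not from any real project:

# Build stage: use a full image only for compiling
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Final stage: start from scratch and copy in only what the binary needs
FROM scratch
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /app /app
ENTRYPOINT ["/app"]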

Lastly, make sure you have a good relationship with the development team in your organization, as you’ll both need to be on the same page in terms of understanding the importance of security in your environment/product. This is done via security education and training, establishing security guidelines and creating a process that considers the unique processes and needs of your engineering organization.

What to Do After Detecting CVEs

You’ve scanned your environment to identify CVEs and are now staring at a list of 200 things to fix. What next? There’s no way you can analyze them all before the next batch of CVEs comes out, so start by identifying and patching the high-priority CVEs first, and then if you have time, work your way down the list in terms of priority. For a more detailed discussion of patching CVEs, stay tuned for our next article, “Vulnerability Management: Best Practices for Patching CVEs.”

Read our guide to learn more about Kubernetes vulnerability scanning.

The post 5 Best Practices for Reducing CVEs in Container Applications appeared first on The New Stack.

]]>
Deploy a Docker Swarm on Rocky Linux https://thenewstack.io/deploy-a-docker-swarm-on-rocky-linux/ Sat, 22 Jul 2023 13:00:48 +0000 https://thenewstack.io/?p=22713194

You may have heard recently that Red Hat has, in my opinion, gone against the heart and soul of open source. If

The post Deploy a Docker Swarm on Rocky Linux appeared first on The New Stack.

]]>

You may have heard recently that Red Hat has, in my opinion, gone against the heart and soul of open source. If you've not heard, essentially it is putting up a paywall so that RHEL clones cannot access the source without paying up. Because of that, many users are migrating to alternatives, such as Rocky Linux.

But as with RHEL, Rocky Linux defaults to Podman as its default container runtime engine. If you’ve been using Docker for years, Podman is a good option but it’s not exactly a 1:1 migration. And so, if Docker is your preferred container deployment tool, you’ll be happy to know that it’s possible to not only install Docker on Rocky Linux but to also deploy a full-blown Docker Swarm on the platform. Even better, you can install Docker on Rocky Linux without removing Podman, so you can enjoy the best of both worlds.

I’m going to show you how to do exactly that. Once complete, you’ll feel right at home deploying and managing Docker containers on this Red Hat Enterprise Linux clone that continues to honor the spirit of open source.

What You’ll Need

To deploy a Docker Swarm on Rocky Linux you’ll need at least two running instances of the open source operating system and a user with sudo privileges. That’s it. Let’s make this happen.

Install the Necessary Dependency

The first thing we’re going to do is install the necessary dependency. This is done on all instances of Rocky Linux. Log into the first instance and run the command for the first dependencies like so:

sudo dnf install dnf-utils -y


When that installation completes, you’re ready to move on to the Docker installation.

Install Docker

Again, this is done on all instances of Rocky Linux.

Add the required repository with the command:

sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo


Once you’ve added the repository, you can then run the command to install everything necessary for Docker Swarm. That command is:

sudo dnf install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y


With Docker installed, you’ll want to start and enable the service with the command:

sudo systemctl enable --now docker


Finally, add your user to the docker group with:

sudo usermod -aG docker $USER


Log out and log back in for the changes to take effect.

Open the Firewall

Before we deploy Docker Swarm, we must first open a few ports in the firewall, which is achieved with the following commands (run on all instances of Rocky Linux):

sudo firewall-cmd --add-port=2377/tcp --permanent
sudo firewall-cmd --add-port=7946/tcp --permanent
sudo firewall-cmd --add-port=7946/udp --permanent
sudo firewall-cmd --add-port=4789/udp --permanent


Reload the firewall with the command:

sudo firewall-cmd --reload
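If you want to double-check that the ports are now open, you can list them with:

sudo firewall-cmd --list-ports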

Initialize the Swarm

With all of that out of the way, we can now initialize the Swarm. This is done on the machine that will serve as your controller. You will be required to know the IP address of your Swarm manager before you run this command. For example’s sake, let’s say the IP address of our Swarm manager is 192.168.1.192. Go back to that machine and issue the command:

docker swarm init --advertise-addr 192.168.1.192


The output of the above command will include the command you must run on all Docker nodes that will join the Swarm. That command will look like this:

docker swarm join --token SWMTKN-1-42cpopou4iljvubx00z53uvj2oc9muqtjucbryrnw97smnwcwm-e4mp25qupifa19xuxzlg6ic6u 192.168.1.192:2377


Of course, the random string of characters and IP address will be different for your instance.

Copy that command and run it on the first node that will join your Swarm.

Upon successfully joining the Swarm, the output of the command will include:

This node joined a swarm as a worker.

Testing the Node

You can verify the connection by going back to the controller and issuing the command:

docker info


Within the output of the command, you should see a section that looks something like this:

Swarm: active
NodeID: iy5c7w7s6zgd8vsr22gnrl1uz
Is Manager: true
ClusterID: qkkabtj3db6niknzgwyph2l98
Managers: 1
Nodes: 2


As you can see, the Swarm is active and there are currently two nodes attached. You can then add more nodes by printing the join command again on the manager (shown below) and running it on each new node. There is no need to initialize the Swarm again; the same worker join token works for every node you add.
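To reprint the worker join command, token and all, run this on the manager:

docker swarm join-token worker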

Another simple test is to deploy a service to the new Docker Swarm. Let’s deploy a simple NGINX service with the command:

docker service create --name nginx_test nginx


That command should successfully deploy an NGINX service with a single replica. If you want to run the service with two replicas (which the Swarm will spread across your nodes), the command would be:

docker service create --replicas 2 --name nginx_test nginx


For three replicas, the command would be:

docker service create --replicas 3 --name nginx_test nginx


If you’ve already deployed the single node service, you can scale it to however many nodes you need with a simple command. Let’s say you have five nodes joined to your Swarm. To scale that service to five, the command would be:

docker service scale nginx_test=5
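To see how the replicas are distributed across your nodes, list the service's tasks from the manager:

docker service ps nginx_test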


Congratulations, you’ve just deployed your first Docker Swarm on Rocky Linux.

The post Deploy a Docker Swarm on Rocky Linux appeared first on The New Stack.

]]>
A Modern Approach to Securing APIs https://thenewstack.io/a-modern-approach-to-securing-apis/ Fri, 21 Jul 2023 13:13:36 +0000 https://thenewstack.io/?p=22713563

The nature of applications has changed from monolithic applications deployed on individual web servers to containerized, cloud native applications distributed

The post A Modern Approach to Securing APIs appeared first on The New Stack.

]]>

The nature of applications has changed from monolithic applications deployed on individual web servers to containerized, cloud native applications distributed across a cluster of nodes. With the emergence and popularity of cloud native applications, APIs have become critical to the modern applications we rely on — from mobile to SaaS — as organizations share vast amounts of data with customers and external users.

On average, modern web applications depend on 26 to 50 APIs to power them, which has expanded the attack surface of an application, increasing the need to secure APIs for stronger authentication, authorization and data privacy to keep applications running. Additionally, with the widespread adoption of public clouds to host web applications, security and developer teams contend with a perimeter-less network that constantly exposes applications to internet-borne threats. For instance, in a recent report, nine out of the top 10 vulnerabilities on internet-facing cloud hosts were found to belong to web/API applications.

Unsecured APIs make easy targets for threat actors to gain access to valuable data and resources within applications across the enterprise. Take the recent Optus breach, for example. It exposed 10 million customer accounts due to an internet-facing API that did not require authorization or authentication to access customer data.

The reality is that 75% of organizations are deploying new or updated code to production weekly, and almost 40% commit new code daily. Security teams struggle to keep pace with application developers to ensure all APIs are secure. In a recent report, 41% of organizations say that their security teams lack visibility and control in the development process, and 40% report new builds are deployed to production with misconfigurations, vulnerabilities and other security issues.

A Holistic, Agile and Modern Approach to Securing APIs

You can’t protect what you cannot find. Security teams need visibility across their potential API attack surface, which many of them don’t have. Although many may have multiple API security products in place, 92% of organizations experienced an API-related security incident last year.

In a perfect world, each API would be registered and monitored; however, that’s not the reality with the pace of the modern engineering ecosystem. As more cloud native applications are developed and deployed, the number of microservices and, in turn, the number of APIs grow and become increasingly difficult to monitor and secure.

Additionally, because APIs are increasingly exposed to the public, organizations must address data exposure risks by implementing best practices to minimize the attack surface, remediate vulnerabilities and prevent threats in real time without slowing down developers. Three practical approaches to API security focus on:

  • API risk profiling: Due to the speed of application development, it’s challenging, without automation, to track risks associated with APIs. Risk profiling allows security teams to minimize the API attack surface and manage protection based on risk factors, such as misconfigurations, exposure to sensitive data and lack of authentication.
  • Shifting left with application security: While you should monitor and inspect web applications and API traffic within production environments to detect risks, developers and security practitioners should strive to catch risks before applications are deployed into production. By shifting security left, vulnerabilities and misconfigurations can be fixed in real time while the developer is still building the application, which facilitates a more efficient and less costly resolution of security problems before the software is in production.
  • Multiple layers of defense: Even if attackers circumvent one protection, you can keep them from compromising the entire environment without affecting additional resources. For cybersecurity practitioners, this is known as the “Swiss cheese paradigm.” In this analogy, the cybersecurity risk of a threat becoming a reality is mitigated by different layers and types of defenses that sit on top of one another to prevent a single point of failure. The top layer should include visibility and monitoring of the attack surface, which can auto-discover all web applications and API endpoints. The second layer should include policies for HTTP requests and API calls to block malicious threats. The third layer should include vulnerability and compliance scanning that implements strong authentication to further protect applications. Lastly, teams should look to protect their workloads or the infrastructure layer, such as hosts, VMs, containers and serverless functions that host the applications. Together, these layers decrease the chances of a successful attack.

The complexities of today’s cloud native, API-centric web applications and the microservices they leverage are full of new security challenges that require strategies and solutions to complement conventional approaches to security while keeping developers top of mind. Developers and security practitioners should work together to find solutions that enable scalable, flexible, multilayered security that works for any type of workload in any environment or cloud architecture.

The post A Modern Approach to Securing APIs appeared first on The New Stack.

]]>
View the Resource Usage of Your Docker Containers https://thenewstack.io/view-the-resource-usage-of-your-docker-containers/ Sat, 15 Jul 2023 13:00:56 +0000 https://thenewstack.io/?p=22712669

What happens when you have a number of Docker containers running and something goes awry? Do you panic and stop

The post View the Resource Usage of Your Docker Containers appeared first on The New Stack.

]]>

What happens when you have a number of Docker containers running and something goes awry? Do you panic and stop all containers? No. You investigate… right? That’s what system administrators have done for years: they locate the problem and then find a solution. The same thing should hold true with containers.

One place you might start your investigation is resource usage. After all, when you have a number of deployed containers, they’ll each use resources differently and you might have to track down the one that’s causing problems. It’s the same as standard applications on a server. When something goes wrong, you might first check the resource usage of those apps to see which one is the problem.

So, why not do the same with your Docker containers? After all, what container developer or admin doesn’t want to know how their deployments are using resources?

Fortunately, there are a few ways to handle this task and I’m going to show you two of them… one from the command line and one from the Docker Desktop GUI. I’ll even show you how to do a quick WordPress deployment so you’ll have at least two containers to monitor.

First, let’s deploy WordPress.

What You’ll Need

To follow along with this, you’ll need the following things:

  • Docker and docker-compose installed on your platform of choice.
  • Docker Desktop installed (if you want to do this the GUI way).

Deploying WordPress with Docker

The first thing we’re going to do is deploy a basic WordPress container. To do that, we’re going to use the docker-compose command. First, create the docker-compose file with the command (or use whatever text editor you prefer):

nano docker-compose.yml


In that file, paste the following content:
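The original compose file isn't reproduced here, so treat the following as a minimal sketch of a WordPress-plus-MySQL stack; the service names, credentials and published port are placeholders you can change:

version: "3.8"

services:
  # Compose names containers <project folder>-<service>-<index>,
  # which is why the stats output later shows names like jack-mysql-1.
  mysql:
    image: mysql:8.0
    restart: always
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wpuser
      MYSQL_PASSWORD: changeme
      MYSQL_ROOT_PASSWORD: changeme-root
    volumes:
      - db_data:/var/lib/mysql

  wordpress:
    image: wordpress:latest
    restart: always
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: mysql
      WORDPRESS_DB_USER: wpuser
      WORDPRESS_DB_PASSWORD: changeme
      WORDPRESS_DB_NAME: wordpress

volumes:
  db_data: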

Make sure to change the username and password as needed. Remember, this is only for testing purposes, so you can opt to leave the above compose file as is.

Save and close the file.

Deploy the containers with the command:

docker-compose up -d


Both MySQL and WordPress containers will deploy. You can verify that with the command:

docker ps

Check the Resource Usage from the Command Line

With your new containers running, issue the command:

docker stats

You should see output similar to this:

CONTAINER ID   NAME               CPU %     MEM USAGE / LIMIT    MEM %     NET I/O           BLOCK I/O        PIDS
c6ba085f1adc   jack-wordpress-1   0.01%     51.72MiB / 3.58GiB   1.41%     75.5kB / 150kB    4.65MB / 58MB    1
c0308bfc83c1   jack-mysql-1       0.09%     192.1MiB / 3.58GiB   5.24%     44.6kB / 39.1kB   1.16MB / 291MB   28
005cb27fc012   vigorous_morse     0.00%     7.398MiB / 3.58GiB   0.20%     1.67kB / 0B       344kB / 12.3kB   9

The above command displays the container ID, name, CPU percentage, memory usage and limit, memory percentage, network I/O, block I/O, and PIDs for each running container. One thing to note is that you don't get your prompt back, because the stats command displays real-time information about the running containers. Because of this, you can watch the stats of those containers as they are used.
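If you would rather take a one-time snapshot and get your prompt back immediately, docker stats also accepts a --no-stream flag:

docker stats --no-stream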

If you have a large number of containers running, you might want to view the output one container at a time. To do that, you must have the container ID. To find the ID of a container, issue the command:

docker ps


Let’s say you have a container with ID c6ba085f1adc and you want to view its stats. For that, the command would be:

docker stats c6ba085f1adc


The output of the above command will only display the real-time stats for that one container.

Check the Resource Usage with Docker Desktop

If Docker Desktop is your tool of choice, you can view the resources of your running containers with the help of a handy extension. To add the extension, open Docker Desktop, click Add Extensions in the sidebar, type Resource Usage in the search field, and then click Install associated with the app (Figure 1).

Figure 1: Installing the Resource Usage for Docker Desktop.


After the extension has been installed, you’ll see Resource usage listed in the sidebar. Click that entry to view a real-time listing of each deployed container (Figure 2).

Figure 2: Container resource usage as viewed from the Docker Desktop GUI.


One of the nice things about the Docker Desktop Resource Usage extension is that you can also stop and start containers from that listing. You also get an at-a-glance view of overall resource usage and even change the refresh rate for the extension (from every 1 second to 5 minutes).

For those who prefer their statistics displayed in charts, click the Chart View tab to see charts for CPU, Memory, Disk R/W, and Net I/O (Figure 3).

Figure 3: A graphical representation of resource usage is available in Docker Desktop.


And that’s all there is to view the resource usage of your Docker containers. Both of these methods will give you plenty of information to start troubleshooting your containers. This may not be the be-all-end-all collection of information, but it’ll allow you to get a peak into the efficiency of your deployments.

The post View the Resource Usage of Your Docker Containers appeared first on The New Stack.

]]>
Hadolint: Lint Dockerfiles from the Command Line https://thenewstack.io/hadolint-lint-dockerfiles-from-the-command-line/ Sat, 08 Jul 2023 13:00:37 +0000 https://thenewstack.io/?p=22712333

The dirty little secret regarding containers is that it's not always as easy as you might expect it to be.

The post Hadolint: Lint Dockerfiles from the Command Line appeared first on The New Stack.

]]>

The dirty little secret regarding containers is that it's not always as easy as you might expect it to be. Case in point: have you ever crafted a Dockerfile by hand, only to have it fail to run? It can be very frustrating. From bad indentation and inappropriate base images to improper tags and wrong volume mappings, there are plenty of issues that can cause Dockerfiles to fail.

That’s why you need linting.

No, I’m not talking about the fluff that builds up in your clothes dryer. I’m talking about the automated checking of code for programmatic and stylistic errors.

Fortunately, linting isn't done manually, as that would not only be very time-consuming but could also lead to errors on top of errors. It's like a writer editing their own work… most often they don't see every error. The same thing holds true with developers. Sometimes you need either a fresh pair of eyes or a tool specifically created for this purpose.

Hadolint mascot

There are plenty of tools out there, some of which are paid services that allow you to upload Dockerfiles (and other bits of code) to have them linted. There are also desktop apps you can use for the purpose of linting. If you prefer the command line, there are plenty of options available, one of which is called Hadolint.

Hadolint is a command line tool that helps you ensure your Dockerfiles follow best practices and parses your Dockerfile into an abstract syntax tree (AST), after which it runs a pre-defined set of rules with the help of ShellCheck (another script analysis tool) to lint the code.

Let’s find out how to use Hadolint to ensure your Dockerfiles are following best practices and aren’t filled with problems you might not be able to see. I’m going to demonstrate on Ubuntu Server 22.04, but Hadolint is available for installation on Linux, macOS, and Windows.

Fortunately, Hadolint isn’t just available to run locally. If you already have Docker installed, you can run the Hadolint container against your Dockerfile. I’ll show you how to do that as well.

First, let’s go the local route.

How to Install Hadolint

Log into your Ubuntu Server instance, and first install ShellCheck with:

sudo apt-get install shellcheck -y


Once that is installed, download Hadolint with the command:

wget https://github.com/hadolint/hadolint/releases/download/v2.12.0/hadolint-Linux-x86_64


Note: Make sure to check the Hadolint download page to ensure you’re downloading the latest version.

Once the file has downloaded, move it (while also renaming it) to a directory in your $PATH with a command such as:

sudo mv hadolint-Linux-x86_64 /usr/local/bin/hadolint


Next, give the file executable permission with:

sudo chmod +x /usr/local/bin/hadolint


You can verify it’s working with the command:

hadolint --help


If you see the help page printed out, you’re good to go.

Lint your Dockerfile Locally

For testing purposes, I used an old Dockerfile I had just lying around. Create the file with the command:

nano Dockerfile


Paste the following contents into that file:

#
# Base the image on the latest version of Ubuntu
FROM ubuntu:latest
#
# Identify yourself as the image maintainer (where EMAIL is your email address)
LABEL maintainer="EMAIL"
#
# Update apt and update Ubuntu
RUN apt-get update && apt-get upgrade -y
#
# Install NGINX
RUN apt-get install nginx -y
#
# Expose port 80 (or whatever port you need)
EXPOSE 80
#
# Start NGINX within the Container
CMD ["nginx", "-g", "daemon off;"]


Save and close the file.

Now, we can lint the file with Hadolint like so:

hadolint Dockerfile


The output should look something like this:

Dockerfile:3 DL3007 warning: Using the latest is prone to errors if the image will ever update. Pin the version explicitly to a release tag
Dockerfile:9 DL3009 info: Delete the apt-get lists after installing something
Dockerfile:12 DL3015 info: Avoid additional packages by specifying --no-install-recommends
Dockerfile:12 DL3008 warning: Pin versions in apt get install. Instead of apt-get install <package> use apt-get install <package>=<version>

Go through the output and make changes to your Dockerfile as needed. Once you’ve made the changes, re-lint the file and, hopefully, you’ve resolved any issues.

Lint Your Dockerfile with the Hadolint Docker Container

If you don’t want to bother installing Hadolint on your machine, you can always run the containerized version of the tool against your locally stored Dockerfile. Of course, for that, you’ll need Docker installed. If you don’t already have Docker available, here are the quick steps for installing it on Ubuntu Linux:

  1. curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
  2. echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
  3. sudo apt-get install apt-transport-https ca-certificates curl gnupg lsb-release curl git -y
  4. sudo apt-get update
  5. sudo apt-get install docker-ce docker-ce-cli containerd.io -y
  6. sudo usermod -aG docker $USER
  7. Log out and log back in.

With Docker installed, you can easily lint your Dockerfile with the Hadolint Docker container like so:
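Per the Hadolint project's documentation, the usual invocation feeds your local Dockerfile to the official hadolint/hadolint image on standard input:

docker run --rm -i hadolint/hadolint < Dockerfile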

If you used the same Dockerfile from earlier (without having changed anything), you should see the same output.

And that’s how you can easily lint your Dockerfiles from the command line. To find out more about how to use Hadolint, make sure to look through the help information (using the Hadolint –help command) to see the different options available. But for basic Dockerfile linting, the straight-up Hadolint command works like a charm.

The post Hadolint: Lint Dockerfiles from the Command Line appeared first on The New Stack.

]]>
How Containers, LLMs, and GPUs Fit with Data Apps https://thenewstack.io/how-containers-llms-and-gpus-fit-with-data-apps/ Fri, 30 Jun 2023 20:29:02 +0000 https://thenewstack.io/?p=22712432

Containers, large language models (LLMs), and GPUs provide a foundation for developers to build services for what Nvidia CEO Jensen

The post How Containers, LLMs, and GPUs Fit with Data Apps appeared first on The New Stack.

]]>

Containers, large language models (LLMs), and GPUs provide a foundation for developers to build services for what Nvidia CEO Jensen Huang describes as an “AI Factory.”

Huang made the statement at the launch of the Snowflake Summit in Las Vegas this week, where Nvidia and Snowflake demonstrated the power that comes with containers as the emerging foundation for generative AI distribution through application architectures that integrate with enterprise data stores.

Announced at the conference and now in private beta, Snowflake’s Snowpark Container Service (SCS) offers the ability to containerize LLMs; through Snowpark, Snowflake will offer Nvidia’s GPUs and NeMO, Nvidia’s “end-to-end, cloud native enterprise framework to build, customize, and deploy generative AI models.”

Snowflake’s investment in container services demonstrates how the company has transformed from its roots as a data warehouse service provider to a data analytics company and now as a company that offers a platform for container services and native application development.

Snowpark serves as the home for SCS. It offers a platform to deploy and process Python, Java, and Scala code in Snowflake through a set of libraries and runtimes, including User Defined Functions (UDFs) and stored procedures.

SCS will offer developers an additional Snowpark runtime option with the capability to “manage, and scale containerized workloads (jobs, services, service functions) using secure Snowflake-managed infrastructure with configurable hardware options, such as GPUs. ”

Hex is an early user of the service. Its CEO and co-founder Barry McCardel describes Hex as a Figma for data or a Google Docs for data. The service allows data analysts and data scientists to work on one collaborative platform.

“We’re using it (SCS) to deploy our software, and it’s interesting because Hex today deploys on Amazon Web Services,” he said. “We’re going to be launching on GCP later this year. But the environment I’m actually most excited about is Snowflake, because I think that being able to run Hex workloads where the users customer data is without having to go through that additional process is very, very compelling.”

The business case for SCS comes down to compliance and governance, which people like McCardel cite as a Snowflake core value. With SCS, a company like Hex can offer customers a way to build applications on their data in Snowflake.

“If we were to try to transact with them outside of Snowflake Container Services, we could have months, years of security reviews and InfoSec stuff, because a lot of those bigger customers are just very cautious, on you know, whether they’re going to let some third party application connect to their data,” McCardel said. “So the way it’ll be for our customers is, it’ll actually just be super seamless. They’ll be able to use Hex on top of their existing data store with minimal effort, minimal overhead.”

With SCS, Snowflake may offer services that relieve much of the operations burdens that slowed teams in synchronizing data. Before SCS, the governed data that Snowflake managed would get moved off the platform to get containerized, creating an administrative burden.

SCS, built on Kubernetes, works in the background for users. It’s opinionated for the Snowflake environment. Customers may import their containers into SCS with little of the operations overhead that they had before.

“Containers were often a reason why customers were taking their data out of Snowflake and putting it somewhere else, oftentimes also introducing redundant copies of the data in let’s say, cloud storage somewhere,” said Torsten Grabs, a senior product manager with Snowflake. “And then they had containerized compute run over that data to do some processing. And then sometimes the results came back into Snowflake. But as soon as you create these redundant copies of your data, then it’s very hard to maintain them over time to keep them in sync. What is your version of truth? Is data governance managed in a consistent way in all these places? So it is much simpler if you can bring the work that these containers are doing to where you have the data and then apply the processing to the data where it sits. That was one of the key motivating factors for us to bring containers over into its platform.”

And here’s what’s key. Customers would often need to orchestrate with containers elsewhere where GPUs, such as AI and machine learning workloads, fulfilled their computational needs. So if the user required GPU-backed computations, then they had to save the data in Snowflake and get it to where they could perform the computing.

Snowflake will build on approaches similar to those it used when it first offered data warehouse services.

“When you’re going to create one of these services, you get to specify the instance that you want to run,” said Christian Kleinerman, senior vice president of product at Snowflake, in a conversation at the Snowflake Summit. “It’s our own mapping of a reduced selection of logical instances. So you will be able to say high memory, low memory, GPU, non-GPU, and then we’ll map it to the right instance on each of the three cloud providers.”

SCS opens opportunities for the use of foundation models in Snowflake.

“One is you bring your own model or you take in various models and you run a container,” Kleinerman said. “You do the heavy lifting. Or the other one is using some third party. They can do a configuration, and publish that as a native app. And then the Snowflake customer that wants to just be a consumer could see just a function. And now I know nothing about AI or ML or containers. And I am using foundation models.”

The Nvidia Connection

NeMO has two main components, Kleinerman said. It comes with certain models trained by Nvidia. It comes as an entire framework, including APIs and a user interface. It helps train a model from scratch or fine-tune it with data fed into the model. The NeMO framework will get hosted inside SCS. NeMo itself comes as a container, allowing model portability into SCS.

Models may get imported and then built on top of NeMO. For example, Snowflake announced Reka as a partner. Reka, which just launched, makes generative models. AI21Labs is also a foundation model partner.

Kari Anne Briski is Nvidia’s vice president of AI Software. Briski said Nvidia is ahead of almost everyone in model development. Snowflake used the Snowflake Summit to announce it will use Nvidia’s GPUs and its training models for developers to build generative AI applications. Briski said Snowflake customers may use large foundation models built on Nvidia’s offerings.

Briski traces her work at Nvidia as a timeline of AI development, illustrating how Snowflake will benefit from Nvidia’s research. Seven years ago, Nvidia accelerated computer vision on a single GPU. Today, Nvidia uses thousands of GPUs to train its foundation models.

Briski said it still takes weeks to months to train a foundation model. By offering pre-trained models, Briski said, users will need far less compute.

A team may customize the model at runtime using “zero-shot or few-shot” learning, which provides ways to produce answers from very little supplied data, she said.

“So you can send in prompts, a couple of examples, at runtime to help it customize, and it goes, ‘Oh, I know what you’re talking about. Now, I’m going to follow your lead.’”

The option of prompt tuning or parameter-efficient fine-tuning (PEFT) allows people to use dozens or hundreds of examples.

“We train a smaller model that the large language model uses so you have this customization model,” Briski said. “We can have hundreds or thousands of customization models.”

According to a Hugging Face blog post, PEFT means “the user only fine-tunes a small number of (extra) model parameters while freezing most parameters of the pre-trained LLMs, thereby greatly decreasing the computational and storage costs.”

All the weights across the network may also change, but that comes with more intensive computing requirements.

LLMs may become vulnerable to hallucinations if used in isolation, which makes a case for vector databases.

“Again, you don’t want to just think of the LLM by itself,” Briski said. “You might think of it as an entire system. You also do fine-tuning for these. So there’s a retriever model, kind of like you look up in a database.”

But overall, the concept of containers, LLMs, and GPUs means faster capabilities, more robust offerings, and the realization that we now can talk to our data, which signals a new age, Huang said in a fireside chat with Snowflake CEO Frank Slootman.

“We’re all going to be intelligence manufacturers in the future,” Huang said. “We will hire employees, of course, and then we will create a whole bunch of agents. And these agents could be created with Langhain or something like that, which connects models, knowledge bases, and other AIs that you deploy in the cloud and connect to all the Snowflake data. And you’ll operate these AIs at scale. And you’ll continuously refine these AIs. And so every one of us is going to be manufacturing AI, so we’re going to be running the AI factories.”

Disclosure: Snowflake paid for the reporter’s airfare and hotel to attend Snowflake Summit.

The post How Containers, LLMs, and GPUs Fit with Data Apps appeared first on The New Stack.

]]>
High Performance Computing Is Due for a Transformation https://thenewstack.io/high-performance-computing-is-due-for-a-transformation/ Tue, 27 Jun 2023 17:00:48 +0000 https://thenewstack.io/?p=22710739

Back in 1994 — yes, almost 30 years ago! — Thomas Sterling and Donald Becker built a computer at NASA

The post High Performance Computing Is Due for a Transformation appeared first on The New Stack.

]]>

Back in 1994 — yes, almost 30 years ago! — Thomas Sterling and Donald Becker built a computer at NASA called the Beowulf.

The architecture of this computer (aka the Beowulf cluster) comprised a network of inexpensive personal computers strung together in a local area network so that processing power could be shared among them. This was a groundbreaking example of a computer that was specifically designed for high-performance computing (HPC) and that was exclusively composed of commodity parts and freely available software.

The Beowulf cluster could be used for parallel computations in which many calculations or processes are carried out simultaneously between many computers and coordinated with message-passing software. This was the beginning of Linux and open source for HPC, and that made the Beowulf truly revolutionary. For the next 10ish years, more and more people followed the Beowulf model. In 2005, Linux took the No. 1 position at top500.org, and it’s been the dominant operating system for HPC ever since.

The basic architecture of a Beowulf cluster starts with an interactive control node(s) where users log in to and interact with the system. The compute, storage and other resources are all connected to a private network (or networks). The software stack includes Linux, operating system management/provisioning (e.g., Warewulf), message passing (MPI), other scientific software and optimized libraries and a batch scheduler to manage the user’s jobs.

Image Source: CIQ

Over time, these systems have become more complicated with multiple tiers of storage and groups of compute resources, but the basic Beowulf framework has remained the same for thirty years. So, too, has the HPC workflow; from a user perspective, we have not made lives easier for HPC consumers for over three decades now! Generally, every HPC user has to follow the same general steps for all HPC systems:

  1. SSH into interactive node(s).
  2. Research and understand the storage system configuration and mount points.
  3. Download source code to the right storage path.
  4. Compile the source code taking into consideration the system or optimized compilers, math libraries (and locations), MPI, and possibly storage and network architecture.
  5. Upload data to compute onto the right storage path (which might be different from source code path above).
  6. Research the resource manager queues, accounts, and policies.
  7. Test and validate the compiled software against test data.
  8. Monitor job execution and verify proper functionality.
  9. Validate job output.
  10. Repeat as necessary.
  11. Download the resulting data for post-processing or further research.

The Ever-Growing Cost of Using a 30-Year-Old HPC Architecture

Our continued use of the legacy HPC framework is exacting a costly toll on the scientific community by way of lost opportunities, unclaimed economies of scale and shadow IT costs.

Lost opportunities include the researchers and organizations that cannot make use of the legacy HPC computing architecture and instead are stuck using non-supportable, non-scalable and non-professionally maintained architectures. For example, I’ve met multiple researchers using their laptops as their computing infrastructure.

Other lost opportunities include the inability to accommodate modern workloads, many of which are insufficiently supported by the legacy HPC architecture. For example, it is nearly impossible to securely integrate the traditional HPC system architecture into CI/CD pipelines for automated training and analytics; simpler development and resource front-ends such as Jupyter (discussed later); jobs of ever-increasing diversity; and multi-prem, off-prem and even cloud resources.

Also, many enterprises have demonstrated resistance to legacy system architectures like Beowulf. “We don’t want our system administrators using Secure Shell (SSH) anymore, and Beowulf requires all users to use SSH to interface with the system!”

When IT teams have to build custom systems for particular needs and usage (which is what is happening now at many scientific centers), they cannot leverage the hardware investments effectively because each “system” exists as an isolated pool of resources. We are seeing this now with centers building completely separate systems for compute-based services and Jupyter with Kubernetes. Going unclaimed are the economies of scale that could be achieved if HPC resources properly supported all of these use cases.

Moreover, in far too many cases research teams are trying to build their own systems or using cloud instances outside of IT purview, because they feel IT is not providing them the tools that they need for their research. While the cloud has made it easy for some forms of computation, it doesn’t always make sense over local on-prem resources or if you’re locked into a single cloud vendor.

These unfortunate truths are stifling research and scientific advancements.

Hints of Progress?

Certainly, a few things have come along that have made the experience for HPC users a bit easier. Open OnDemand, for example, is a fantastic way to encapsulate the entire Beowulf architecture and give it back to the user as an http-based (i.e., web-based) graphical interface. OnDemand offers great value in providing a more modern user interface (UI) than SSH, but many sites have found that it has not significantly lowered the barrier of entry because the user still has to understand all of the same steps outlined above.

Another improvement is Jupyter Notebooks, which has been a huge leap in terms of making life better for researchers and developers. Often used in academia for teaching purposes, Jupyter helps researchers do real-time development and run “notebooks” using a more modern interactive, web-based interface. With Jupyter, we’re finally seeing the user’s experience evolving — the list of steps is simplified.

However, Jupyter is not generally compatible with the traditional HPC architecture, and, as a result, it has not been possible to integrate with existing HPC architectures. As a matter of fact, a number of traditional HPC centers run their traditional HPC systems on one side, and they use their Jupyter system on the other side to run on top of Kubernetes and enterprise-focused infrastructures. True, you can use Open OnDemand plus Jupyter to merge these approaches, but that recomplicates the process for users — adding more and different steps that make the process difficult.

Containers Lead the Way to a More Modern HPC World

Containers have served as a “Pandora’s Box” (in a good way!) to the HPC world by demonstrating that there are numerous innovations that have occurred in the non-HPC spaces which can be quite beneficial to the HPC community.

The advent of containers in enterprise was via Docker and the like, but these container implementations required privileged root access to operate and thus would open up security risks to HPC systems by allowing non-privileged users access to run containers. That’s why I created the first general-purpose container system for HPC — Singularity — which immediately was adopted by HPC centers worldwide due to the massive previously unmet demand. I have since moved Singularity into the Linux Foundation to guarantee that the project will always be for the community, by the community and free of all corporate control. As part of that move, the project was renamed to Apptainer.

Apptainer has changed how people think about reproducible computing. Now applications are much more portable and reusable between systems, researchers and infrastructures. Containers have simplified the process of building custom applications for HPC systems as they can now easily be encapsulated into a container that includes all of the dependencies. Containers have been instrumental in starting the process of HPC modernization, but it is just the first step to making lives better for HPC users. Imagine what comes next as we approach the transformation of HPC driving the next generation HPC environments.

What Is to Come?

It is time for the computing transformation: the generalized HPC architecture needs to be modernized to be able to better provide for a wider breadth of applications, workflows and use cases. Taking advantage of modern infrastructure innovations (cloud architecture, hardware such as GPUs, etc.), we must build HPC systems that support not only the historical/legacy use cases but also the next generation of HPC workloads.

At CIQ, we’re currently working on this and have been developing a solution that will make HPC approachable for users of all experience levels. The vision is to provide a modern cloud native, hybrid, federated infrastructure that will run clusters on-premises and multipremises, in the cloud and multicloud, even in multiple availability regions in multiclouds.

A gigantic distributed computing architecture will be stitched together with a single API, offering researchers total flexibility in locality, mobility, gravity and data security. In addition, we aim to abstract away all the complexity of operation and minimize the steps involved in running HPC workflows.

Our goal is to enable science by modernizing HPC architecture — both to support a greater breadth of job diversity and to lower the barrier of entry to HPC to more researchers, optimizing the experience for all.

The post High Performance Computing Is Due for a Transformation appeared first on The New Stack.

]]>
Get up to Speed with Containers Very Quickly with DockSTARTer https://thenewstack.io/get-up-to-speed-with-containers-very-quickly-with-dockstarter/ Sat, 24 Jun 2023 13:00:20 +0000 https://thenewstack.io/?p=22711186

I’m on a constant hunt for applications that help ease the complexity of Docker container deployments. I recently found an

The post Get up to Speed with Containers Very Quickly with DockSTARTer appeared first on The New Stack.

]]>

I’m on a constant hunt for applications that help ease the complexity of Docker container deployments. I recently found an app that just might be of considerable use to those who are new to Docker and need to get a leg up on using the container tool.

That app is called DockSTARTer, and it aims to make getting up to speed with Docker quick and easy. DockSTARTer is a curses-based terminal application that allows you to easily deploy containerized applications while learning about some of Docker's more advanced configuration along the way.

DockSTARTer is very much a tool for those new to Docker to use as a learning environment. So, for those who are experts with the Docker command line, this tool probably isn’t for you. If, on the other hand, you’re just getting started with Docker, you’ll be glad you have this tool on your side.

Basically, what DockSTARTer does is, via an easy-to-navigate menu system, walk you through the deployment of containerized applications, asking simple questions and allowing you to edit or customize the configurations (if needed). With DockSTARTer, you can easily set app/VPN/Global variables, without having to create an .env file or a complicated manifest. DockSTARTer makes this all very easy.

Let me show you how to install everything and then deploy an application with DockSTARTer.

Installing Docker and Docker Compose

If you already have Docker and Docker Compose installed, skip this step. I’m going to demonstrate the process on Ubuntu Linux 22.04. If you’re using a different distribution of Linux, you’ll need to modify the installation commands to suit your package manager.

We’ll first install the community edition of Docker. Add the required GPG key with the command:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg


Next, add the Docker repository:

echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] 

https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null


We then must install a few necessary dependencies with:

sudo apt-get install apt-transport-https ca-certificates curl gnupg lsb-release curl git -y


Update apt with:

sudo apt-get update


Install the latest version of the Docker CE runtime engine with the command:

sudo apt-get install docker-ce docker-ce-cli containerd.io -y


Add your user to the docker group with the command:

sudo usermod -aG docker $USER


The last step is to install Docker Compose with:

sudo apt-get install docker-compose -y


Log out and log back in for the changes to take effect.

Installing DockSTARTer

The developer of DockSTARTer has created an easy-to-use installation script. Run this command to install the software:

bash -c "$(curl -fsSL https://get.dockstarter.com)"


Once the installation is complete, you must reboot your machine with the command:

sudo reboot


Log back into your desktop and open a terminal window.

Deploy Your First Container with DockSTARTer

Before we do this, let me say that DockSTARTer doesn’t include an exhaustive list of applications available for installation. There are quite a few but you might not find everything you need. Even so, this is still a great way to learn the ins and outs of Docker deployment.

From the terminal window, launch DockSTARTer with the command:

ds


In the first window (Figure 1), make sure Full Setup is selected (using your cursor keys if necessary) and then hit Enter on your keyboard.

Figure 1: The main DockSTARTer window.

In the next window (Figure 2), scan through the list of applications and select the one you want to deploy by highlighting it and hitting the space bar to select.

Figure 2: The DockSTARTer app selection window.

Once you’ve made your selection tab down to select OK and hit Enter on your keyboard. You will then be greeted by the first configuration window, which will vary, depending on the application you’ve selected to install. For example, when deploying Portainer with DockSTARTer, I am first presented with the configuration options for PORTAINER_NETWORK_MODE, PORTAINER_PORT, and PORTAINER_RESTART (Figure 3).

Figure 3: If these variables are okay, select Yes and hit Enter on your keyboard.

You will then be presented with the WATCHTOWER options window (Figure 4). Scan through these options and, if they’re okay, select Yes and hit Enter.

Figure 4: Configuring the WATCHTOWER options for a Portainer deployment.

The next screen (Figure 5) is for networking. Once again, if those are good, highlight Yes and hit Enter on your keyboard.

Figure 5: The network settings for my Portainer deployment.

Finally, if the Global settings are good (Figure 6), highlight Yes and hit Enter on your keyboard.

Figure 6: The Global settings for Portainer.

You will finally be presented with a prompt asking if you want to run compose now. Highlight Yes and hit Enter on your keyboard.

When compose completes, you should see something like this in the output:

[+] Building 0.0s (0/0)
[+] Running 2/2
 ✔ Container portainer   Started                                           0.5s
 ✔ Container watchtower  Started                                        0.3s


The container you selected has been deployed and is ready to access.

If you then want to view the Docker Compose file for the app you just deployed, change into the .docker directory with:

cd ~/.docker


From there, change to the compose directory with:

cd compose


You can then view the compose file with the command:

less docker-compose.yml


The best part about DockSTARTer is that it gives you a peek into how the Docker sausage is made. Without having to dive right into the deep end of containers, you can slowly make your way from the shallow end, learning how Docker manifests are crafted and what variables are necessary to make things work.

The post Get up to Speed with Containers Very Quickly with DockSTARTer appeared first on The New Stack.

]]>
Run OpenTelemetry on Docker https://thenewstack.io/run-opentelemetry-on-docker/ Tue, 20 Jun 2023 15:30:34 +0000 https://thenewstack.io/?p=22697186

The OpenTelemetry project offers vendor-neutral integration points that help organizations obtain the raw materials — the “telemetry” — that fuel

The post Run OpenTelemetry on Docker appeared first on The New Stack.

]]>

The OpenTelemetry project offers vendor-neutral integration points that help organizations obtain the raw materials — the “telemetry” — that fuel modern observability tools, and with minimal effort at integration time.

But what does OpenTelemetry mean for those who use their favorite observability tools but don’t exactly understand how it can help them? How might OpenTelemetry be relevant to the folks who are new to Kubernetes (the majority of KubeCon attendees during the past years) and those who are just getting started with observability?

The OpenTelemetry project has created demo services to help cloud native community members better understand cloud native development practices and test out OpenTelemetry, as well as Kubernetes, observability software, container environments like Docker, etc.

At this juncture in DevOps history, there has been considerable hype around observability for developers and operations teams. More recently, much attention has gone toward combining the different observability solutions in use through a single interface, and to that end OpenTelemetry has emerged as a key standard.

Learning Curve

Observability and OpenTelemetry, while conceptually straightforward, do require a learning curve to use. To that end, the OpenTelemetry project has released a demo to help. It is intended to help users both better understand cloud native development practices and test out OpenTelemetry, as well as Kubernetes, observability software, etc., the project’s creators say.

OpenTelemetry Demo v1.0 general release is available on GitHub and on the OpenTelemetry site. The demo helps with learning how to add instrumentation to an application to gather metrics, logs and traces for observability. There is heavy instruction for open source projects like Prometheus for Kubernetes and Jaeger for distributed tracing, and the demo also shows how to get acquainted with tools such as Grafana to create dashboards. It extends to scenarios in which failures are created and OpenTelemetry data is used for troubleshooting and remediation. The demo was designed for beginner- and intermediate-level users, and can be set up to run on Docker or Kubernetes in about five minutes.

The stated goals for the OpenTelemetry demo the project team communicated are:

  • Provide a realistic example of a distributed system that can be used to demonstrate OpenTelemetry instrumentation and observability.
  • Build a base for vendors, tooling authors, and others to extend and demonstrate their OpenTelemetry integrations.
  • Create a living example for OpenTelemetry contributors to use for testing new versions of the API, SDK, and other components or enhancements.

OpenTelemetry and Docker

In this tutorial, we look at how to run the OpenTelemetry demo in a Docker environment. Let’s get started.

The prerequisites are:

To note, if you are running Docker in Windows, you need to make sure that you have Admin privileges activated to deploy the OpenTelemetry demo in Microsoft PowerShell (yet another Windows aggravation).

We first clone the repo:

Navigate to the cloned folder:

Run Docker Compose (--no-build) and start the demo:
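The commands for those three steps aren't shown above, but based on the upstream OpenTelemetry demo repository they look roughly like this (the repository URL, folder name and flags are the project's defaults and may change over time):

git clone https://github.com/open-telemetry/opentelemetry-demo.git
cd opentelemetry-demo
docker compose up --no-build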

Head over to your Docker Desktop if you are on Windows and you should see the OpenTelemetry container ready to go in the dashboard:

Access the OpenTelemetry-Demo-Main and watch the Demo metrics data live:

And that is it. Now the fun can start!

Getting the Demo to run on Docker is, of course, just the beginning. There are loads of possibilities available to do more with the Demo that will likely be the subject of future tutorials.

This includes setting up the Astronomy Shop eCommerce demo application, which the maintainers of the project describe as an example of an application that a cloud native developer might be responsible for building, maintaining, etc.:

Several pre-built dashboards for the eCommerce application are available, such as this one for Grafana. It is used to track latency metrics from spans for each endpoint:

Feature Flags

Feature flags, such as the recommendationCache feature flag, will initiate failures in the code that can be monitored with a panel using Grafana or Jaeger (Jaeger is used here):

Once the images are built and the containers are started, you can access the demo's components.
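In the default Docker Compose setup, everything is published behind a single front proxy, so the usual entry points are along these lines (the ports and paths reflect the demo's defaults at the time of writing and may change in newer releases):

  • Web store: http://localhost:8080/
  • Grafana: http://localhost:8080/grafana/
  • Load Generator UI: http://localhost:8080/loadgen/
  • Jaeger UI: http://localhost:8080/jaeger/ui/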

Long Way

This OpenTelemetry demo project has come a long way. While bugs can exist, of course, that is why GitHub is there in part so you can help this project become even better than it already is. The demo GitHub page also offers a number of resources to get started.

In a future tutorial, stay tuned for the steps to get the Astronomy Shop eCommerce demo application up and running and view all the fabulous metrics provided with OpenTelemetry with a Grafana panel.

The post Run OpenTelemetry on Docker appeared first on The New Stack.

]]>
Install and Use Podman Desktop GUI to Manage Containers https://thenewstack.io/install-and-use-podman-desktop-gui-to-manage-containers/ Sat, 17 Jun 2023 14:00:43 +0000 https://thenewstack.io/?p=22710472

For many, Docker Desktop is the go-to desktop GUI for container management. That’s all fine and well if Docker is your runtime

The post Install and Use Podman Desktop GUI to Manage Containers appeared first on The New Stack.

]]>

For many, Docker Desktop is the go-to desktop GUI for container management. That’s all fine and well if Docker is your runtime of choice. But if you either use a distribution that doesn’t include Podman by default (such as Ubuntu) or a distribution that makes installing Docker a challenge (such as most of the Red Hat Enterprise Linux-based distributions), you might want to seek out an alternative.

For that, Red Hat has you covered.

That alternative is Podman Desktop. You can learn a bit more about Podman Desktop in this piece, but so long as you think of it as Podman’s answer to Docker Desktop, you’re already ahead of the curve. Red Hat released the latest version of the software in May.

Simply put, Podman Desktop simplifies the process of deploying and managing containers. If Podman is your container runtime engine of choice, you’re going to want to (at least) kick the tires of this well-designed desktop application.

I’m going to show you how to install Podman Desktop on Fedora Linux. The primary reason why I’ll demonstrate with that platform is that getting Podman running successfully on an Ubuntu-based system isn’t exactly for the faint of heart.

There are a lot of issues with Podman on non-RHEL systems and most admins don’t want to take the time to solve all of them. In fact, if an Ubuntu distribution is your Linux of choice, I would recommend sticking with Docker and Docker Desktop. Besides, Podman should come pre-installed on most RHEL-based distributions. And considering we’re installing Podman Desktop with Flatpak, you don’t also have to worry about installing Flatpak (because it should be there by default).

Do note that Podman Desktop is also available for installation on macOS and Windows. The installation for those platforms is as simple as downloading the installer (for macOS, or Windows), double-clicking the downloaded file, and walking through the wizard.

The installation of Podman Desktop on Linux is done from the command line, so let’s get to it.

Installing Podman Desktop

It’s actually quite easy to install Podman Desktop. Log into your Linux desktop, open a terminal window, and issue the command:

flatpak install flathub io.podman_desktop.PodmanDesktop


Answer Y to the resulting questions and wait for the installation to complete.

If you receive an error that the application isn’t found, you might have to add the Flathub repository with the command:

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo


Once the installation completes, log out of the desktop and log back in (so the Podman Desktop menu entry becomes available). You can then open your desktop menu, search for Podman Desktop, and click the launcher to open the app.

Using Podman Desktop

When you first open the app, you’ll find yourself on the Get Started page (Figure 1).

Figure 1: The Podman Desktop Get Started window allows you to deselect telemetry if you like.

Click Go to Podman Desktop at the bottom right corner of the window, where you then find yourself on the Dashboard (Figure 2), where you can see the extensions and view documentation.

Figure 2: The Podman Desktop Dashboard has plenty of information for you.

Click the Container icon (second from the top on the left sidebar). In the resulting window (Figure 3), click Create a container near the top right.

Figure 3: You can also run a container from the command line as instructed.

In the popup, click Existing image. In the next window (Figure 4), click Pull an image.

Figure 4: As you can see, I’ve already pulled the NGINX image.

You can also go directly to the Images section by clicking the cloud icon in the left sidebar (which takes you directly to the Pull Image window).

On the Pull Image window, type the name of the image you want to pull and click Pull image (Figure 5).

Figure 5: The Pull Image window allows you to pull any image from the default repositories.

Say, for instance, you want to pull the latest NGINX image. For that, type nginx:latest and then click Pull image.

Next, go back to the Containers section and click Create a container. When prompted, click Existing image and then click the right-pointing arrow for the NGINX image listing. This will open the container configuration window (Figure 6), where you can customize the container deployment (such as volumes, port mapping, environment variables, and more).

Figure 6: Configuring an NGINX container deployment.

For example, you might want to use a volume for the deployment. In the case of NGINX, add a path to a directory on your host in the left field, and then the NGINX container document root (/usr/share/nginx/html) in the right field.
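For reference, the equivalent deployment from the command line would look roughly like this (the host path, port mapping and image tag here are purely illustrative):

podman run -d --name nginx-test -p 8080:80 -v /path/on/host:/usr/share/nginx/html:Z docker.io/library/nginx:latest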

Once you’ve configured the container, click Start container to deploy. You should then see the container listed as Running (Figure 7).

Figure 7: We’ve successfully deployed an NGINX container with Podman Desktop.

Of course, since you just deployed an NGINX container, you’d have to open the firewall to the port you mapped. For example, if you used external port 8080, you could open the firewall with the command:

sudo firewall-cmd --permanent --zone=public --add-port=8080/tcp
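Because that rule was added with the --permanent flag, reload the firewall so it takes effect immediately:

sudo firewall-cmd --reload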


And that’s the gist of using the Podman Desktop application. If you prefer Podman to Docker, consider this an important tool for you to either get up to speed with the technology or a more efficient means of managing your container deployments.

I highly recommend you give this app a try and see if it doesn’t make using Podman considerably easier. You might even find it quickly becoming your go-to means of interacting with your Podman deployments.

The post Install and Use Podman Desktop GUI to Manage Containers appeared first on The New Stack.

]]>
A CTO’s Guide to Navigating the Cloud Native Ecosystem https://thenewstack.io/a-ctos-guide-to-navigating-the-cloud-native-ecosystem/ Tue, 13 Jun 2023 16:39:29 +0000 https://thenewstack.io/?p=22710615

While container and cloud technology are increasingly mature, there are still a lot of different software, staffing and architecture considerations

The post A CTO’s Guide to Navigating the Cloud Native Ecosystem appeared first on The New Stack.

]]>

While container and cloud technology are increasingly mature, there are still a lot of different software, staffing and architecture considerations that CTOs must address to ensure that everything runs smoothly and operates together.

The Gartner “A CTO’s Guide to Navigating the Cloud Native Container Ecosystem” report estimates that by 2028, more than 95% of global organizations will be running containerized applications in production, which is a significant increase from fewer than 50% in 2023.

This level of adoption means that organizations must have the right software to effectively manage, monitor and run container-based, cloud native environments. And there is a multitude of options for CTOs and enterprise architecture (EAs) leaders to sift through, which makes it hard to get environments level-set and to standardize processes.

“Despite the apparent progress and continued industry consolidation, the ecosystem remains fragmented and fast-paced. This makes it difficult for EAs and CTOs to build robust cloud native architectures and institute operational governance,” the authors state.

As container adoption expands for cloud native environments, more IT leaders will see an increase in both vendor and open source options. Such variety makes it harder to select the right tools to run a cloud native ecosystem and stretches out the evaluation process.

Here’s a look at container ecosystem components, software offerings and how CTOs can evaluate the best configuration for their organization.

What Are the Components of Container-Based Cloud Native Ecosystems?

Gartner explains that “containers are not a monolithic technology, the ecosystem is a hodgepodge of several components vital for production readiness.”

The foundation of a containerized ecosystem includes:

  • Container runtime lets developers deploy applications, configurations and other container image dependencies.
  • Container orchestrator supports features for policy-based deployment, application configuration management, high availability cluster establishment and container integration into overall infrastructure.
  • Container management software provides a management console, automation features, plus operational, security and developer tools. Vendors in this sector include Amazon Web Services (AWS), Microsoft, Google, Red Hat, SUSE and VMware.
  • Open source tools and code: The Cloud Native Computing Foundation is the governance body that hosts several open source projects in this space.

These components all help any container-based applications run on cloud native architecture to support business functions and IT operations, such as DevOps, FinOps, observability, security and APIs. There are lots of open source projects that support all of these architectural components and platform engineering tools for Kubernetes.

At the start of cloud native ecosystem adoption, Gartner recommends:

Map your functional requirements to the container management platforms and identify any gaps that can be potentially filled by open source projects and commercial products outlined in this research for effective deployments.

Choose open source projects carefully, based on software release history, the permissiveness of software licensing terms and the vibrancy of the community, characterized by a broad ecosystem of vendors that provide commercial maintenance and support.

What Are the Container Management Platform Components?

Container management is an essential part of cloud native ecosystems; it should be top of mind during software selection and container environment implementation. But legacy application performance monitoring isn’t suited for newer cloud technology.

Cloud native container management platforms include the following tools:

  • Observability enables a skilled observer — a software developer or site reliability engineer — to effectively explain unexpected system behavior. Gartner mentions Chronosphere for this cloud native container management platform.
  • Networking manages communication inside the pod, between cluster containers and with the outside world.
  • Storage delivers granular data services, high availability and performance for stateful applications with deep integration with the container management systems.
  • Ingress control gatekeeps network communications of a container orchestration cluster. All inbound traffic to services inside the cluster must pass through the ingress gateway.
  • Security and compliance provides assessment of risk/trust of container content, secrets management and Kubernetes configurations. It also extends into production with runtime container threat protection and access control.
  • Policy-based management lets IT organizations programmatically express IT requirements, which is critical for container-based environments. Organizations can use the automation toolchain to enforce these policies.

More specific container monitoring platform components and methodologies include Infrastructure as Code, CI/CD, API gateways, service meshes and registries.

How to Effectively Evaluate Software for Cloud Native Ecosystems

There are two types of container platforms that bring all required components together: integrated cloud infrastructure and platform services (CIPS) and software for the cloud.

Hyperscale cloud providers offer integrated CIPS capabilities that allow users to develop and operate cloud native applications with a unified environment. Almost all of these providers can deliver an effective experience within their platforms, including some use cases of hybrid cloud and edge. Key cloud providers include Alibaba Cloud, AWS, Google Cloud, Microsoft Azure, Oracle Cloud, IBM Cloud and Tencent.

Vendors in the second category, software for the cloud, offer on-premises and edge solutions, and may offer either marketplace or managed services offerings in multiple public cloud environments. Key software vendors include Red Hat, VMware, SUSE (Rancher), Mirantis, HashiCorp (Nomad), etc.

Authors note critical factors of platform provider selection include:

  • Automated, secure, and distributed operations
    • Hybrid and multicloud
    • Edge optimization
    • Support for bare metal
    • Serverless containers
    • Security and compliance
  • Application modernization
    • Developer inner and outer loop tools
    • Service mesh support
  • Open-source commitment
  • Pricing

IT leaders can figure out which provider has the most ideal offering if they match software to their infrastructure (current and future), security protocols, budget requirements, application modernization toolkit and open source integrations.

Gartner recommends that organizations:

Strive to standardize on a consistent platform, to the extent possible across use cases, to enhance architectural consistency, democratize operational know-how, simplify developer workflow and provide sourcing advantages.

Create a weighted decision matrix by considering the factors outlined above to ensure an objective decision is made.

Prioritize developers’ needs and their inherent expectations of operational simplicity, because any decision that fails to prioritize the needs of developers is bound to fail.

Read the full report to learn about ways to effectively navigate cloud native ecosystems.

The post A CTO’s Guide to Navigating the Cloud Native Ecosystem appeared first on The New Stack.

]]>
Deploy a Kubernetes Development Environment with Kind https://thenewstack.io/deploy-a-kubernetes-development-environment-with-kind/ Sat, 10 Jun 2023 14:00:45 +0000 https://thenewstack.io/?p=22709234

Let me set the stage: You’re just starting your journey into Kubernetes and you’re thrilled at the idea of developing

The post Deploy a Kubernetes Development Environment with Kind appeared first on The New Stack.

]]>

Let me set the stage: You’re just starting your journey into Kubernetes and you’re thrilled at the idea of developing your first application or service. Your first step is to deploy a Kubernetes cluster so you can start building but almost immediately realize how challenging a task that is.

All you wanted to do was take those first steps into the world of container development but actually getting Kubernetes up and running in a decent amount of time has proven to be a bit of a challenge.

Would that there was something a bit kinder.

There is and it’s called kind.

From the official kind website: kind is a tool for running local Kubernetes clusters using Docker container “nodes.” kind was primarily designed for testing Kubernetes itself but may be used for local development or continuous integration.

Kind is one of the easiest ways of starting out with Kubernetes development, especially if you’re just beginning your work with containers. In just a few minutes you can get kind installed and running, ready for work.

Let me show you how it’s done.

What You’ll Need

You can install kind on Linux, macOS, and Windows. I’ll demonstrate how to install kind on all three platforms. Before you install kind on your operating system of choice, you will need to have both Docker and Go installed. I’ll demonstrate it on Ubuntu Server 22.04. If you use a different Linux distribution, you’ll need to alter the installation steps accordingly.

Installing Docker

The first thing to do is install Docker. Here’s how on each OS.

Linux

Log into your Ubuntu instance and access a terminal window. Add the official Docker GPG key with the command:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg


Add the Docker repository:

echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null


Install the necessary dependencies with the command:

sudo apt-get install apt-transport-https ca-certificates curl gnupg lsb-release git -y


Update apt:

sudo apt-get update


Install the latest version of the Docker CE runtime engine:

sudo apt-get install docker-ce docker-ce-cli containerd.io -y


Add your user to the docker group with the command:

sudo usermod -aG docker $USER


Log out and log back in for the changes to take effect.
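If you want to confirm the runtime is working before moving on, a quick smoke test is:

docker run hello-world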

MacOS/Windows

The easiest method of installing Docker on macOS and Windows is by way of Docker Desktop. You can download the installers for macOS Intel, macOS Apple Silicon, or Windows, double-click the files, and walk through the installation wizards.

Installing Go

Next, install Go. Here’s how.

Ubuntu Linux

To install Go on Ubuntu, open a terminal window and issue the command:

sudo apt-get install golang-go -y

MacOS/Windows

To install Go on macOS or Windows, simply download and run the installer file which can be found for macOS Intel, macOS Apple Silicon, and Windows.

Installing kind

Now, we can install kind. Here’s how for each platform.

Linux

Download the binary file with the command:

curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.14.0/kind-linux-amd64


Give the file the necessary permissions with:

chmod +x kind


Move it to /usr/bin with:

sudo mv kind /usr/bin/

MacOS

Open the terminal application. For macOS Intel, download kind with:

[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.14.0/kind-darwin-amd64


For Apple Silicon, issue the command:

[ $(uname -m) = arm64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.14.0/kind-darwin-arm64


Give the file executable permissions with:

chmod +x kind


Move kind so that it can be run globally with the command:

mv ./kind /usr/local/bin/kind

Windows

Open the terminal window app. Download kind with:

curl.exe -Lo kind-windows-amd64.exe https://kind.sigs.k8s.io/dl/v0.14.0/kind-windows-amd64


Move the executable file to the directory of your choice with the command:

Move-Item .\kind-windows-amd64.exe c:\DIRECTORY\kind.exe


Where DIRECTORY is the name of the directory to house kind.
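On any of the three platforms, you can confirm the binary is reachable by printing its version:

kind version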

Create a Dev Environment

It’s now time to deploy your first Kubernetes cluster with kind. Let’s create one called tns-test with the command:

kind create cluster --name=tns-test


You should see the following output in the terminal window:

✓ Ensuring node image (kindest/node:v1.24.0) 🖼

✓ Preparing nodes 📦

✓ Writing configuration 📜

✓ Starting control-plane 🕹️

✓ Installing CNI 🔌

✓ Installing StorageClass 💾

Once the output completes, you’re ready to go. One thing to keep in mind, however, is that the command only deploys a single node cluster. Say you have to start developing on a multinode cluster. How do you pull that off? First, you would need to delete the single node cluster with the command:

kind delete cluster --name=tns-test


Next, you must create a YML file that contains the information for the nodes. Do this with the command:

nano kindnodes.yml


In that file, paste the following contents:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker


Save and close the file. You can then deploy with the command:

kind create cluster --name=tns-multi-test --config=kindnodes.yml


To verify your cluster is running, issue the command:

kind get clusters


You should see tns-multi-test in the output.

If you want to interact with kubectl, you first must install it. On Ubuntu, that’s as simple as issuing the command:

sudo snap install kubectl --classic


Once kubectl is installed, you can check the cluster info with a command like this:

kubectl cluster-info --context kind-tns-multi-test


You should see something like this in the output:

Kubernetes control plane is running at https://127.0.0.1:45465
CoreDNS is running at https://127.0.0.1:45465/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy


To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

You can now start developing on a multinode Kubernetes cluster, with full use of the kubectl command.
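As a quick sanity check, you can also list the nodes in the cluster (kind derives the node names from the cluster name):

kubectl get nodes --context kind-tns-multi-test

You should see one control-plane node and one worker node in the output.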

The post Deploy a Kubernetes Development Environment with Kind appeared first on The New Stack.

]]>
Chainguard Improves Security for Its Container Image Registry https://thenewstack.io/chainguard-improves-security-for-its-container-image-registry/ Wed, 31 May 2023 13:30:49 +0000 https://thenewstack.io/?p=22709510

A year ago, Chainguard released Chainguard Images. These are container base images designed for a secure software supply chain. They

The post Chainguard Improves Security for Its Container Image Registry appeared first on The New Stack.

]]>

A year ago, Chainguard released Chainguard Images. These are container base images designed for a secure software supply chain. They do this by providing developers and users with continuously updated base container images with zero-known vulnerabilities. That’s all well and good, but now the well-regarded software developer security company has also upgraded how it hosts and distributes its Images to improve security.

Before this, Chainguard distributed its images using a slim wrapper over GitHub’s Container Registry. The arrangement allowed the company to focus on its tools and systems, enabling flexible adjustments to image distribution.

However, as the product gained traction and scaling became necessary, Chainguard ran into limitations. So, the business reevaluated its image distribution process and created its own registry. Leveraging the company’s engineering team’s expertise in managing hyperscale registries, Chainguard has built the first passwordless container registry, focusing on security, efficiency, flexibility and cost-effectiveness.

How It Works

Here’s how it works. For starters, for Identity and Access Management (IAM), Chainguard relies on short-lived OpenID Connect (OIDC) credentials instead of conventional username-password combinations. OIDC is an identity layer built on top of the OAuth 2.0 framework. To ensure the registry is only accessible to authorized Chainguard personnel, only the GitHub Actions workflow identity can push to the public Chainguard registry repository. This promotes a secure, auditable and accountable process for making changes to the repository.

On the user side, when pulling images, you can authenticate with a credential helper built into Chainguard’s chainctl CLI. This also relies on OIDC for authentication. With this approach, there are no long-lived tokens stored on the user’s computer. Both chainctl and the credential helper are aware of common OIDC-enabled execution environments such as GitHub Actions. With this, customers can also limit who and how images can be pulled.
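As a concrete example, the free public images can be pulled with a plain Docker client and no credentials at all (cgr.dev/chainguard is the public namespace at the time of writing):

docker pull cgr.dev/chainguard/nginx:latest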

If your environment doesn’t support OIDC, the registry also offers long-lived authentication options. For the sake of your own security, I urge you to move to an OIDC-compliant process.

For now, existing Chainguard Images customers cannot push directly to the registry. It is currently used only to host Images that Chainguard itself creates and manages.

As part of the Chainguard Enforce software supply chain control plane platform, the new Chainguard Registry supports CloudEvents to notify users of significant activities with their images. Customers can create subscriptions and receive event notifications for image pushes and pulls, including failures. They can leverage these events to initiate base image updates, conduct vulnerability scans, duplicate pushed images or audit system activities.

Cloudflare R2

Chainguard’s done this by building its own container image registry on Cloudflare R2. With this new method, the company has far greater control and has cut back considerably on its costs.

Why Cloudflare R2? Simple. It’s all about egress fees — the cloud provider charges for external data transfer. Chainguard opted for Cloudflare R2 for image blob distribution. Because it offers zero egress-fee hosting and a fast, globally trusted distribution network, promising a sustainable model for hosting free public images without excessive costs or rate limitations.

This is a huge deal. As Jason Hall, a Chainguard software engineer, explained, “The 800-pound gorilla in the room of container image registry operators is egress fees. … Image registries move a lot of bits to a lot of users all over the world, and moving those bits can become very expensive, very quickly. In fact, just paying to move image bits is often the main cost of operating an image registry. For example, Docker’s official Nginx image has been pulled over a billion times, about 31 million times in the last week alone. The image is about 55 megabytes, so that’s 1.7 PB of egress. At S3’s standard egress pricing of $0.05/GB, that’s $85,000, to serve just the nginx image, for just one week.”

To pay for this, companies that host registries have had to pay cloud providers for hosting. You end up paying for it as the image providers pass the costs along to you with paid plans or up-priced services.

Chainguard thinks Cloudflare R2 “fundamentally changes the story for image hosting providers and makes this a sustainable model for hosting free public images without imposing onerous costs or rate limits.” I think Cloudflare needs to pay its bills too, and eventually, there will be a charge for the service.

For now, though, Chainguard can save money and re-invest in further securing images. This sounds like a win to me. You can try Chainguard Images today to see if their security-first images work for you.

The post Chainguard Improves Security for Its Container Image Registry appeared first on The New Stack.

]]>
How to Protect Containerized Workloads at Runtime https://thenewstack.io/how-to-protect-containerized-workloads-at-runtime/ Tue, 30 May 2023 11:00:22 +0000 https://thenewstack.io/?p=22709118

Security is (finally) getting its due in the enterprise. Witness trends such as DevSecOps and the “shift left” approach —

The post How to Protect Containerized Workloads at Runtime appeared first on The New Stack.

]]>

Security is (finally) getting its due in the enterprise. Witness trends such as DevSecOps and the “shift left” approach — meaning to move security as early as possible into development pipelines. But the work is never finished.

Shift left and similar strategies are generally good things. They begin to address a long-overdue problem of treating security as a checkbox or a final step before deployment. But in many cases that is still not quite enough for the realities of running modern software applications. The shift left approach might only cover the build and deploy phases, for example, but not apply enough security focus to another critical phase for today’s workloads: runtime.

Runtime security “is about securing the environment in which an application is running and the application itself when the code is being executed,” said Yugal Joshi, partner at the technology research firm Everest Group.

The emerging class of runtime security tools and practices aims to address three essential security challenges in the age of containerized workloads, Kubernetes, and heavily automated CI/CD pipelines, according to Utpal Bhatt, CMO at Tigera, a security platform company.

First, the speed and automation intrinsic to modern software development pipelines create more threat vectors and opportunities for vulnerabilities to enter a codebase.

Second, the orchestration layer itself, like Kubernetes, also heavily automates the deployment of container images and introduces new risks.

Third, the dynamic nature of running container-based workloads, especially when those workloads are decomposed into hundreds or thousands of microservices that might be talking to one another, creates a very large and ever-changing attack surface.

“The threat vectors increase with these types of applications,” Bhatt told The New Stack. “It’s virtually impossible to eliminate these threats when focusing on just one part of your supply chain.”

Runtime Security: Prevention First

Runtime security might sound like a super-specific requirement or approach, but Bhatt and other experts note that, done right, holistic approaches to runtime security can bolster the security posture of the entire environment and organization.

The overarching need for strong runtime security is to shift from a defensive or detection-focused approach to a prevention-focused approach.

“Given the large attack surface of containerized workloads, it’s impossible to scale a detection-centric approach to security,” said Mikheil Kardenakhishvili, CEO and co-founder of Techseed, one of Tigera’s partners. “Instead, focusing on prevention will help to reduce attacks and subsequently the burden on security teams.”

Instead of a purely detection-based approach, one that often burns out security teams and puts them in the position of being seen as bottlenecks or inhibitors by the rest of the business, the best runtime security tools and practices, according to Bhatt, implement a prevention-first approach backed by traditional detection response.

“Runtime security done right means you’re blocking known attacks rather than waiting for them to happen,” Bhatt said.

Runtime security can provide common services as a platform offering that any application can use for secure execution, noted Joshi, the Everest Group analyst.

“Therefore, things like identity, monitoring, logging, permissions, and control will fall under this runtime security remit,” he said. “In general, it should also provide an incident-response mechanism through prioritization of vulnerability based on criticality and frequency. Runtime security should also ideally secure the environment, storage, network and related libraries that the application needs to use to run.”

A SaaS Solution for Runtime Security

Put in more colloquial terms: Runtime security means securing all of the things commonly found in modern software applications and environments.

The prevention-first, holistic approach is part of the DNA of Calico Open Source, an open source networking and network security project for containers, virtual machines, and native host-based workloads, as well as Calico Cloud and Calico Enterprise, the latter of which is Tigera’s commercial platform built on the open source project it created.

Calico Cloud, a software-as-a-service (SaaS) solution focused on cloud native apps running in containers with Kubernetes, offers security posture management, robust runtime security for identifying known threats, and threat-hunting capabilities for discovering Zero Day attacks and other previously unknown threats.

These four components of Calico — securing your posture in a Kubernetes-centric way, protecting your environment from known attackers, detecting Zero Day attacks, and incident response/risk mitigation — also speak to four fundamentals for any high-performing runtime security program, according to Bhatt.

Following are the four principles to follow for protecting your runtime.

4 Keys to Doing Runtime Security Right

1. Protect your applications from known threats. This is core to the prevention-first mindset, and focuses on ingesting reliable threat feeds that your tool(s) continuously check against — not just during build and deploy but during runtime as well.
Examples of popular, industry-standard feeds include network addresses of known malicious servers, process file hashes of known malware, and the OWASP Top 10 project.

2. Protect your workloads from vulnerabilities in the containers. In addition to checking against known, active attack methods, runtime security needs to proactively protect against vulnerabilities in the container itself — and in everything that the container needs to run, including the environment.

This isn’t a “check once” type of test, but a virtuous feedback loop that should include enabling security policies that protect workloads from any vulnerabilities, including limiting communication or traffic between services that aren’t known/trusted or when a risk is detected.

3. Detect and protect against container and network anomalous behaviors. This is “the glamorous part” of runtime security, according to Bhatt, because it enables security teams to find and mitigate suspicious behavior in the environment even when it’s not associated with a known threat, such as with Zero Day attacks.

Runtime security tools should be able to detect anomalous behavior in container or network activity and alert security operations teams (via integration with security information and event management, or SIEM, tools) to investigate and mitigate as needed.

4. Assume breaches have occurred; be ready with incident response and risk mitigation. Lastly, even while shifting to a prevention-first, detection-second approach, Bhatt said runtime security done right requires a fundamental assumption that your runtime has already been compromised (and will occur again). This means your organization is ready to act quickly in the event of an incident and minimize the potential fallout in the process.

Zero trust is also considered a best strategy for runtime security tools and policies, according to Bhatt.

The bottom line: The perimeter-centric, detect-and-defend mindset is no longer enough, even if some of its practices are still plenty valid. As Bhatt told The New Stack: “The world of containers and Kubernetes requires a different kind of security posture.”

Runtime security tools and practices exist to address the much larger and more dynamic threat surface created by containerized environments. Bhatt loosely compared today’s software environments to large houses with lots of doors and windows. Legacy security approaches might only focus on the front and back door. Runtime security attempts to protect the whole house.

Bhatt finished the metaphor: “Would you rather have 10 locks on one door, or one lock on every door?”

The post How to Protect Containerized Workloads at Runtime appeared first on The New Stack.

]]>
How to Containerize a Python Application with Paketo Buildpacks https://thenewstack.io/how-to-containerize-a-python-application-with-packeto-buildpacks/ Mon, 29 May 2023 12:00:03 +0000 https://thenewstack.io/?p=22709274

Containers have been in use for almost a decade, but containerizing applications can still pose challenges. More specifically, Dockerfiles —

The post How to Containerize a Python Application with Paketo Buildpacks appeared first on The New Stack.

]]>

Containers have been in use for almost a decade, but containerizing applications can still pose challenges. More specifically, Dockerfiles — which dictate how container images are built — can be challenging to write properly. Even simple Dockerfiles can be problematic. A study found that nearly 84% of the projects they analyzed had smells — which are quality problems — in their Dockerfile.

In this article, I will demonstrate an alternative method to Dockerfiles for containerizing an application, following best practices, with just a single command. Before demonstrating this technique, let’s first look at the difficulties associated with containerizing applications using traditional approaches.

Great Dockerfiles Are Hard to Write

What’s so hard about Dockerfiles?

  1. It’s a craft: writing good Dockerfiles requires deep knowledge and experience. There are a number of best practices that must be implemented for every Dockerfile. Developers — who are generally the ones writing them — might not have the knowledge or resources to do it right.

  2. Security: they can be a security threat if not well written. For example, a common issue with Dockerfiles is that they often use the root user in their instructions, which can create security vulnerabilities and allow an attacker to gain full control over the host system.

  3. They are not natively fast: getting fast build time needs work, from ensuring that you use minimal base images, minimize the number of layers, use build caching and set up a multistage build.

Learning how to create the perfect Dockerfile can be enjoyable when working with one or two images. However, the excitement wanes as the number of images increases, requiring management across multiple repositories, projects, and stacks, as well as constant maintenance. This is where the open source project Paketo Buildpacks offers a solution.

An Easier Way

Before diving into the tutorial, let’s discuss the concept behind Buildpacks, an open source project maintained by the Cloud Native Computing Foundation.

Developed by Heroku, Buildpacks transform application source code into images that can run on any cloud platform. They analyze the code, identify what is needed to build and run the software, and then assemble all components into an image. By examining applications, Buildpacks determine the necessary dependencies and configure them in a series of layers, ultimately creating a container image. Buildpacks also feature optimization mechanisms to reduce build time.

While the Cloud Native Buildpacks project offers a specification for Buildpacks, it doesn’t supply ready-to-use Buildpacks; that’s what Paketo Buildpacks provide. This community-driven project develops production-ready Buildpacks.

Paketo Buildpacks adhere to best practices for each language ecosystem, currently supporting Java, Go, Node.js, .NET, Python, and PHP, among others. The community constantly addresses vulnerabilities in upstream language runtimes and operating system packages, saving you the effort of monitoring for susceptible dependencies.

Let’s Containerize a Python Application

There are two requirements to use this tutorial:

  1. Have Docker Desktop installed; here is a guide to install it.

  2. Have pack CLI installed; here is a guide to install it.

In this example, we will use a Python application. I provide a sample app for the sake of testing but feel free to use your own.

Once you are in the application root directory, run the command:

pack build my-python-app --builder paketobuildpacks/builder:base

That’s the only command you need to create an image! Now you can run it as you would usually do.

docker run -ti -p 5000:8000 -e PORT=8000 my-python-app

Now let’s check that the app is working properly by running this command in another terminal:

$ curl 0:5000

Hello, TheNewStack readers!

$

You can continue developing your application, and whenever you need a new image, simply run the same pack build command. The initial run of the command might take some time, as it needs to download the paketobuildpacks/builder:base image. However, subsequent iterations will be much faster, thanks to advanced caching features implemented by buildpack authors.

Other Benefits of Using Paketo Buildpacks?

With increasing security standards, numerous engineering organizations have started to depend on SBOMs (software bills of materials) to mitigate the risk of vulnerabilities in their infrastructure. Buildpacks offer a straightforward approach to gaining insights into the contents of images through standard build-time SBOMs, which Buildpacks can generate in CycloneDX, SPDX, and Syft JSON formats.

You can try it on your image by using the following command:

pack sbom download my-python-app

Another benefit of using Paketo Buildpacks is that you will be using minimal images that contain only what is necessary. For example, while my image based on paketobuildpacks/builder:base was only 295MB, a bare python:3 Docker image is already 933MB.
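If you want to see that difference on your own machine, check the size of the image you just built with:

docker images my-python-app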

Conclusion

Although Dockerfiles have served us well, they are not the most efficient use of a developer’s time. The need to manage and maintain Dockerfiles can become significant, especially with the rise of microservices and distributed architecture. By using Paketo Buildpacks, developers can build better images faster, giving them more time to focus on what adds more value to their projects. And the best part? While we used Python in this article, the same principle can be applied to any project with any supported stack.

The post How to Containerize a Python Application with Paketo Buildpacks appeared first on The New Stack.

]]>
Can Rancher Deliver on Making Kubernetes Easy? https://thenewstack.io/can-rancher-deliver-on-making-kubernetes-easy/ Sat, 27 May 2023 14:00:18 +0000 https://thenewstack.io/?p=22708481

Over the past few years, Kubernetes has become increasingly difficult to deploy. When you couple that with the idea that

The post Can Rancher Deliver on Making Kubernetes Easy? appeared first on The New Stack.

]]>

Over the past few years, Kubernetes has become increasingly difficult to deploy. When you couple that with the idea that Kubernetes itself can be a challenge to learn, you have the makings of a system that could have everyone jumping ship for the likes of Docker Swarm.

I’m always on the lookout for easier methods of deploying Kubernetes for development purposes. I’ve talked extensively about Portainer (which I still believe is the best UI for container management) and have covered other Kubernetes tools, such as another favorite, MicroK8s.

Recently, I’ve started exploring Rancher, a tool that hasn’t (for whatever reason) been on my radar to this point. The time for ignoring the tool is over and my initial experience so far has been, shall I say, disappointing. One would expect a tool with a solid reputation for interacting with Kubernetes to be easy to deploy and use. After all, the official Rancher website makes it clear it is “Kubernetes made simple.” But does it follow through with that promise?

Not exactly.

Let me explain by way of walking you through the installation and the first steps of both Rancher on a server and the Rancher Desktop app.

One thing to keep in mind is that this piece is a work in progress and this is my initial experience with the tool. I will continue my exploration with Rancher as I learn more about the system. But this initial piece was undertaken after reading the official documentation and, as a result, made a few discoveries in the process. I will discuss those discoveries (and the results from them) in my next post.

I’m going to show you how I attempted to deploy Rancher on Ubuntu Server 22.04.

Installing Rancher on Ubuntu Server 22.04

Before you dive into this, there’s one very important thing you need to know. Installing Rancher this way does not automatically give you a Kubernetes cluster. In fact, you actually need a Kubernetes cluster already running. This is only a web-based GUI. And even then, it can be problematic.

The first step to installing Rancher on Ubuntu Server is to log into your Ubuntu server instance. That server must have a regular user configured with sudo privileges and a minimum of 2 CPU Core and 4 GB RAM.

Once you’ve logged in, you must first install a few dependencies with the command:

sudo apt-get install apt-transport-https ca-certificates curl gnupg lsb-release -y


Next, add the necessary GPG key with:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg


Add the official Docker repository:

echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null


Update apt with:

sudo apt-get update


Install the latest version of the Docker CE runtime engine:

sudo apt-get install docker-ce docker-ce-cli containerd.io -y


Add your user to the docker group with the command:

sudo usermod -aG docker $USER


Finally, log out and log back in for the changes to take effect.

Deploy Rancher

Now that Docker is installed, you can deploy Rancher with:

docker run -d --name=rancher-server --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.4.18


An older version of Rancher must be used because the latest version fails to start.

The deployment will take some time to complete. When it does open a web browser and point it to http://SERVER (where SERVER is the IP address of the hosting server). You’ll be greeted by the welcome screen, where you must set a password for the admin user (Figure 1).

Figure 1: Setting a password for the default Rancher admin user.

In the next window (Figure 2), you must set the Rancher Server URL. If you’ll be using an IP address, leave it as is. If you’ll use a domain, change the entry and click Save URL.

Figure 2: Setting the URL for the Rancher server.

You will then be prompted to add a cluster (Figure 3). If your cluster is in-house, select “From existing nodes”. If you’ll be using a cluster from a third party, select the service.

Figure 3: Selecting the cluster type for your deployment.

In the resulting window (Figure 4), fill out the necessary details and configure the cluster as needed. At the bottom of the window, click Next.

Figure 4: The Custom cluster configuration window.

You will then be given a command to run on your Kubernetes cluster (Figure 5).

Figure 5: The command must be run on a supported version of Docker (I used the latest version of Docker CE).

After the command completes on the Kubernetes server, click Done.

At this point, the cluster should register with Rancher. “Should” being the operative term. Unfortunately, even though my Kubernetes cluster was running properly, the registration never succeeded. Even though the new node was listed in the Nodes section, the registration hadn’t been completed after twenty minutes. This could be because my Kubernetes cluster is currently being pushed to its limits. Because of that, I rebooted every machine in the cluster and tried again.

No luck.
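If you hit a similar wall, the Rancher server logs are a reasonable first stop (the container name comes from the docker run command used earlier):

docker logs -f rancher-server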

My guess is the problem with my setup is the Kubernetes cluster was deployed with MicroK8s and Rancher doesn’t play well with that system. Although you can deploy Rancher with MicroK8s, Helm, and a few other tools, that process is quite challenging.

I decided to bypass deploying Rancher on Ubuntu Server and went straight to Rancher Desktop. After all, Rancher Desktop is supposed to be similar to Docker Desktop, only with a Kubernetes backend.

Here’s the process of installing Rancher Desktop on Pop!_OS Linux:

  1. First, check to make sure you have kvm privileges with the command [ -r /dev/kvm ] && [ -w /dev/kvm ] || echo 'insufficient privileges'
  2. Generate a GPG key with gpg --generate-key
  3. Copy your GPG key and add it to the command pass init KEY (where KEY is your GPG key)
  4. Allow Traefik to listen on port 80 with sudo sysctl -w net.ipv4.ip_unprivileged_port_start=80
  5. Add the Rancher GPG key with the command curl -s https://download.opensuse.org/repositories/isv:/Rancher:/stable/deb/Release.key | gpg --dearmor | sudo dd status=none of=/usr/share/keyrings/isv-rancher-stable-archive-keyring.gpg
  6. Add the official Rancher repository with echo 'deb [signed-by=/usr/share/keyrings/isv-rancher-stable-archive-keyring.gpg] https://download.opensuse.org/repositories/isv:/Rancher:/stable/deb/ ./' | sudo dd status=none of=/etc/apt/sources.list.d/isv-rancher-stable.list
  7. Update apt with the command sudo apt update
  8. Install Rancher Desktop with sudo apt install rancher-desktop -y

Launch Rancher Desktop from your desktop menu and accept the default PATH configuration (Figure 6).

Figure 6: The only configuration option you need to set for Rancher Desktop.

Rancher Desktop will then download and start the necessary software to run. Once that completes, you’ll find yourself on the Welcome to Rancher Desktop window (Figure 7).

Figure 7: The main Rancher Desktop window.

Here’s where things take a turn for the confusing. With Rancher Desktop, the only things you can actually do are manage port forwarding, pull and build images, scan images for vulnerabilities (which is a very handy feature), and troubleshoot. What you cannot do is deploy containers.

To do that, you have to revert to the command line using the nerdctl command which, oddly enough, isn’t installed along with Rancher Desktop on Linux. I did run a test by installing Rancher Desktop on macOS and found that nerdctl was successfully installed, leading me to believe this is a Linux issue. Another thing to keep in mind is that the macOS installation of Rancher Desktop is considerably easier. However, it suffers from the same usability issues as it does on Linux.

If you’d like to keep experimenting with Rancher Desktop, you’ll need to get up to speed with nerdctl which I demonstrated here.
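To give you a sense of it, nerdctl deliberately mirrors the Docker CLI, so a basic deployment looks something like this (the image and port here are illustrative):

nerdctl run -d --name nginx-test -p 8080:80 nginx:latest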

You can also build an image with Rancher Desktop, by clicking Images > Add Image and then clicking the Build tab. Give your image a name and click Build. You then must select a build directory. What it doesn’t tell you is that the build directory must contain a proper Dockerfile. With the Dockerfile in place, the image will build.

Maybe the GUI should key users in on that fact.

Once the image is built, you should be good to go to deploy a container based on that image. Right? Not within Rancher Desktop you can’t. Instead, you have to go back to the terminal window and deploy the container with the nerdctl command.

How is any of this Kubernetes made simple? It’s not. If you want Kubernetes made simple, you go with the MicroK8s/Portainer combo and call it a day.

From my perspective, if you’re going to claim that your product makes Kubernetes simple (which is a big promise, to begin with), you shouldn’t require users to jump through so many hoops to reach a point where they can successfully work with the container management platform. Simple is a word too many companies use these days but fail to deliver on.

The post Can Rancher Deliver on Making Kubernetes Easy? appeared first on The New Stack.

]]>
Red Hat Podman Container Engine Gets a Desktop Interface https://thenewstack.io/red-hat-podman-container-engine-gets-a-desktop-interface/ Tue, 23 May 2023 14:30:34 +0000 https://thenewstack.io/?p=22708811

Red Hat’s open source Podman container engine now has a full-fledged desktop interface. With a visual user interface replacing Podman’s

The post Red Hat Podman Container Engine Gets a Desktop Interface appeared first on The New Stack.

]]>

Red Hat’s open source Podman container engine now has a full-fledged desktop interface.

With a visual user interface replacing Podman’s command lines, the open source enterprise software company wants to attract developers new to the containerization space, as well as small businesses that wish to test the waters for running their applications on Kubernetes, particularly of the OpenShift variety.

The desktop “simplifies the creation, management, and deployment of containers, while abstracting the underlying configuration, making it a lightweight, efficient alternative for container management, reducing the administrative overhead,” promised Mithun Dhar, Red Hat vice president and general manager for developer tools and programs, in a blog post.

Podman, short for Pod Manager, is a command line tool for managing containers in a Linux environment, executing tasks such as inspecting and running containers, building and pulling images.

In its own Linux distributions, Red Hat offers Podman in lieu of the Docker container engine for running containers. Docker also has a desktop interface for its own container engine, so time will tell how Red Hat’s desktop interface will compare. The Red Hat desktop can work not only with the Podman container engine itself but also with Docker and Lima, a container engine for Mac.

Podman Desktop 1.0 offers a visual environment for all of these tasks supported by Podman itself. From the comfort of a graphical user interface, devs can build images, pull images from registries, push images to OCI registries, start and stop containers, inspect logs, start terminal sessions from within containers, and test and deploy their images on Kubernetes. It also offers widgets to monitor the usage of the app itself.
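
On the Kubernetes side, the desktop is essentially putting a GUI on workflows Podman already supports from the command line. As a rough sketch (using the hypothetical container named web from above), generating Kubernetes YAML from a running container and replaying it looks like this:

# Generate Kubernetes YAML from a running container, then replay it with Podman
podman generate kube web > web-pod.yaml
podman play kube web-pod.yaml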

It’s very Kubernetes-friendly. Kind, a tool for running Kubernetes multi-node clusters locally, provides an environment for creating and testing applications. Developers can work directly with Kubernetes Objects through Podman.
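
As a hedged example, pointing kind at Podman instead of Docker is typically a matter of setting its (still experimental) provider variable before creating a cluster; the cluster name here is arbitrary:

# Use Podman as the kind provider and create a local test cluster
KIND_EXPERIMENTAL_PROVIDER=podman kind create cluster --name podman-test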

Podman Desktop can be installed on Windows, Linux or macOS.

OpenShift Connects

OpenShift is Red Hat’s enterprise Kubernetes platform, so not surprisingly, Red Hat is using Podman Desktop as a ramp-up point for OpenShift.

The desktop, Dhar wrote, is integrated with Red Hat OpenShift Local, which provides a way to test applications in a production-equivalent environment.

Podman Desktop is also connected to Developer Sandbox for Red Hat OpenShift, a free cloud-based OpenShift hosting service. This could give an organization a way to test its applications in a Kubernetes environment.

Red Hat released the desktop software during its Red Hat Summit, being held this week in Boston.


Red Hat paid for this reporter’s travel and lodging to attend the Red Hat Summit.

The post Red Hat Podman Container Engine Gets a Desktop Interface appeared first on The New Stack.

Scan Container Images for Vulnerabilities with Docker Scout https://thenewstack.io/scan-container-images-for-vulnerabilities-with-docker-scout/ Sat, 20 May 2023 13:00:55 +0000 https://thenewstack.io/?p=22707932


The security of your containers builds on a foundation formed from the images you use. If you work with an image rife with vulnerabilities, your containers will be vulnerable. Conversely, if you build your containers on a solid foundation of secure images, those containers will be more secure by default (so long as you follow standard best practices).

Every container developer who’s spent long enough with the likes of Docker and Kubernetes understands this idea. The issue is putting it into practice. Fortunately, there are plenty of tools available for scanning images for vulnerabilities. One such tool is Docker Scout, which was released in early preview with Docker Desktop 4.17. The tool can be used either from the Docker Desktop GUI or the command line interface and offers insights into the contents of a container image.

What sets Docker Scout apart from some of the other offerings is that it displays not only CVEs but also the composition of the image (such as the base image and update recommendations). In other words, anyone who depends on Docker should consider Scout a must-use.

I’m going to show you how to use Docker Scout from both the Docker Desktop GUI and the Docker command line interface.

What You’ll Need

To use Docker Scout, you’ll need Docker Desktop installed, which is available for Linux, macOS, and Windows. When you install Docker Desktop, it will also install the Docker CLI tool. If you prefer the command line, I’ll first show you how to install the latest version of Docker CE (Community Edition). You’ll also need a user with sudo (or admin) privileges.

How to Install Docker CE

The first thing we’ll do is install Docker CE. I’ll demonstrate on Ubuntu Server 22.04, so if you use a different Linux distribution, you’ll need to alter the installation commands as needed.

If you’ve already installed Docker or Docker Desktop, you can skip these steps.

First, install the required dependencies (the curl, gnupg, and lsb-release packages are used by the commands that follow):

sudo apt-get install apt-transport-https ca-certificates curl gnupg lsb-release git -y


Next, add the official Docker GPG key with the command:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg


Add the Docker repository:

echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null


Update apt with:

sudo apt-get update


Finally, we can install the latest version of the Docker CE runtime engine:

sudo apt-get install docker-ce docker-ce-cli containerd.io -y


Next, you must add your user to the docker group with the command:

sudo usermod -aG docker $USER


Log out and log back in for the changes to take effect.
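
To confirm the group change took effect, you can run a quick test container once you log back in; it should work without sudo:

docker run --rm hello-world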

How to Use Docker Scout from Docker Desktop

The first method I’ll demonstrate is via the Docker Desktop GUI. When you open Docker Desktop, you should see Docker Scout listed in the left navigation. Do take note that the feature is currently in early access. Once early access closes, you’ll need either a Docker Pro, Team, or Business subscription to use the feature. Until then, however, the feature is free to use in Docker Desktop.

Click Docker Scout and you’ll see the Analyze Image button and a drop-down where you can select the image you want to scan. If you don’t see the image you want to scan in the drop-down, you’ll need to pull it by typing the image name in the Search field at the top of the Docker Desktop window, clicking the Images tab in the resulting popup, and then clicking Pull (Figure 1).

Figure 1: Pulling the official NGINX image with Docker Desktop.
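
If you’d rather skip the GUI for this step, pulling the image from a terminal works just as well:

docker pull nginx:latest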


Once the image is pulled, go back to Docker Scout, select the image from the drop-down, and click Analyze Image (Figure 2).

Figure 2: Analyzing the latest NGINX image.

Depending on the size of the image, the analysis shouldn’t take too much time. When it completes, it will report back what it finds. For example, with the nginx:latest image, it found zero vulnerabilities or other issues (Figure 3).

Figure 3: The nginx:latest image is clean.

On the other hand, a quick scan of the Rocky Linux minimal image comes up with 16 vulnerabilities, all of which are marked as High. After that scan, click View Packages and CVEs to reveal the detailed results. You can expand each entry to view even more results (Figure 4).

Figure 4: Click Fixable Packages to see what packages have issues you can easily mitigate.

How to Run Docker Scout from the CLI

If you prefer the command line, Docker Scout has you covered. Let’s examine the NGINX image. There are four main commands you can use with the Docker Scout CLI:

  • docker scout compare – Compares two images and displays the differences (see the sketch after this list).
  • docker scout cves – Displays the CVEs identified for any software artifacts in the image.
  • docker scout quickview – Displays a quick overview of an image.
  • docker scout recommendations – Displays all available base image updates and remediation recommendations.
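
Of the four, compare is the only one not demonstrated below, so here is a hedged sketch of how it reads (the tags are placeholders; substitute the two images you actually want to diff):

# Compare one tag against another to see what changed (example tags only)
docker scout compare --to nginx:latest nginx:1.25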

Let’s run the quickview command on the latest NGINX image. That command looks like this:

docker scout quickview nginx:latest


The results will reveal any CVEs found in your image, the base image, and the updated base image (Figure 5).

Figure 5: The quickview results for the nginx:latest image.

The results will also offer you other commands you can run on the image to get more details, such as:

docker scout cves nginx:latest
docker scout recommendations nginx:latest


I would highly recommend running the recommendations command because it gives you quite a lot of important information about the image.

And that’s the gist of using Docker Scout from both the Docker Desktop GUI and the CLI. If you’re serious about the security of your containers, you should start using this tool right away.

The post Scan Container Images for Vulnerabilities with Docker Scout appeared first on The New Stack.
