Incus, named after the Cumulonimbus incus or anvil cloud and written in Go, is a next-generation system container, application container, and virtual machine manager.

Based on LXC for containers and QEMU for virtual machines, it offers a seamless cloud-like experience, scaling from a developer's laptop to a full cluster of up to 50 servers.

When running a system container, Incus simulates a virtual version of a full operating system. To do this, it uses the functionality provided by the kernel running on the host system.

When running an application container, Incus runs isolated applications within the host’s operating system using container images, similar to how Docker operates.

When running a virtual machine, Incus uses the hardware of the host system, but the kernel is provided by the virtual machine. Therefore, virtual machines can be used to run, for example, a different operating system.

In addition to managing containers and VMs, Incus also provides a variety of options to manage storage and networking.
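To illustrate the three instance types, here is a minimal sketch (the image names are examples, and the OCI remote requires a recent Incus release with OCI support):

incus launch images:debian/12 sys1          # system container with a full OS userland
incus remote add docker https://docker.io --protocol=oci
incus launch docker:nginx app1              # application container from an OCI image
incus launch images:debian/12 vm1 --vm      # virtual machine with its own kernel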

History

The Incus project was forked from LXD by Aleksa Sarai as a community-driven alternative to Canonical's LXD.

Today, it’s led and maintained by many of the same people that once created LXD.

The April release of TrueNAS will ship with Incus in the base OS, integrated into the UI.

Incus 6.0 LTS is supported until June 2029.

Cheatsheet

Instances

| Command | Description |
| --- | --- |
| `incus top` | display CPU, memory, and disk usage per instance |
| `incus list` | list instances |
| `incus launch images:ubuntu/20.04 <container_name>` | launch a container |
| `incus launch images:alpine/edge <container_name> --network <network>` | launch a container on a given network, for example macvlan |
| `incus launch images:archlinux/current arch --vm` | launch a VM |
| `incus launch images:ubuntu/20.04 <instance> -c limits.cpu=1 -c limits.memory=192MiB` | limit CPU/RAM |
| `incus launch ... --profile <profile>` | launch an instance with profile config; the last profile takes precedence |
| `incus copy <instance1> <instance2>` | copy a container |
| `incus start <instance>` | start a container (for example after a copy, which doesn't start it) |
| `incus info <instance>` | get details on a container or VM |
| `incus stop <instance>` | stop it |
| `incus delete <instance>` | delete it |
| `incus delete <instance> --force` | may be required (for example if you don't want to stop it first) |
| `incus exec <instance> -- free -m` | check RAM consumption |
| `incus exec <instance> -- bash` | launch an interactive shell in your instance |
| `incus exec <instance> -- nproc` | check processor availability |
| `incus console <instance> --type=vga` | access the VM screen |
| `incus config device show <instance>` | show instance devices |
| `incus config set <instance> limits.memory=128MiB` | update config while running |
| `incus export <instance> [target] [--instance-only] [--optimized-storage]` | export an instance as a backup tarball |
| `incus import <backup file> [<instance name>]` | import an instance backup, including its snapshots |
| `exit` | exit the instance shell |
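A typical session chaining a few of these commands might look like this (a sketch; names and limits are arbitrary):

incus launch images:ubuntu/22.04 web1 -c limits.cpu=2 -c limits.memory=1GiB
incus exec web1 -- bash      # poke around inside, then exit
incus stop web1
incus delete web1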

Images

| Command | Description |
| --- | --- |
| `incus image list` | list images |
| `incus image delete <img>` | delete an image |
| `incus image info\|show <img>` | show useful information about an image, or its properties |
| `incus image get-properties <img>` | get image properties |
| `incus image edit <img>` | launch a text editor to edit the properties |
| `incus image alias create <alias> <img>` | create an alias for an existing image |
| `incus image alias delete <alias>` | delete an alias |
| `incus image import <tarball>\|<directory>\|<URL> [<rootfs tarball>]` | import images |
| `incus image copy [<remote>:]<image> <remote>: [flags]` | copy images between servers |
| `incus image export <img> [<target>]` | export and download images |
| `incus image list images:` | list images from the linuxcontainers.org repository |
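For example, to cache a remote image locally under a friendly alias and launch from it (a sketch; the alias and instance names are arbitrary):

incus image copy images:debian/12 local: --alias deb12
incus launch deb12 c1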

Storage

| Command | Description |
| --- | --- |
| `incus storage list` | list storage pools |
| `incus storage create <pool> <driver>` | create a storage pool |
| `incus storage create <pool> <driver> < config.yaml` | create a storage pool from YAML |
| `incus storage delete <pool>` | delete a storage pool |
| `incus storage info\|show <pool>` | show useful information, or configuration and resources |
| `incus storage edit <pool>` | edit storage pool configuration as YAML |
| `incus launch <image> <instance> --storage <pool>` | reference a storage pool while launching an instance |
| `incus storage volume list\|create\|rename\|move\|delete\|info\|show\|snapshot\|attach\|detach\|attach-profile\|detach-profile\|set\|unset\|edit <pool> ...` | many operations available to manage volumes within a pool |
| `incus storage volume create <pool> <volume> size=1GiB initial.uid=1000 initial.gid=1000 initial.mode=0700` | new feature in 6.8 to set the uid, gid, and mode of a volume |
| `incus storage volume attach <pool> <volume> <instance> <path_within_instance>` | attach a volume to an instance |
| `incus config device add <instance> <device> disk pool=<pool> source=<volume_name> [path=<path_within_instance>]` | attach a volume to an instance through configuration |
| `incus storage volume set <pool> <volume> size=<new_size>` | grow a storage volume (block-based volumes cannot be shrunk) |
| `incus move <instance> --storage <target_pool>` | move an instance to another pool |
| `incus profile device add <profile> root disk path=/ pool=<pool>` | associate a storage pool with a profile |
| `incus file push <source path> ... <instance>/<path>` | push files to the instance |
| `incus file pull <instance>/<path> ... <target path>` | pull files from your instance |
| `incus file pull first/var/log/syslog - \| less` | pull a file to stdout, here to page through an instance's syslog |
| `incus console <instance> --show-log` | check instance logs |

Storage drivers: `dir | btrfs | lvm | zfs | ceph | cephfs | cephobject`. The best options are ZFS (more reliable) and Btrfs, but Btrfs is not a good idea for VMs, where performance will be pretty bad. Prefer ZFS, which works great for both VMs and containers, and it's best to dedicate a full disk or partition rather than a loop device.
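Following that advice, a ZFS pool on a dedicated disk might look like this (a sketch; /dev/sdb is an assumption, and any content on that disk will be lost):

incus storage create tank zfs source=/dev/sdb
incus launch images:debian/12 c1 --storage tank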

Snapshots

| Command | Description |
| --- | --- |
| `incus snapshot list <container>` | list all snapshots of a container |
| `incus info <container>` | list all information about a container, including snapshots |
| `incus snapshot create <container> <snapshot>` | create a snapshot; `--reuse` allows overwriting an existing one |
| `incus snapshot restore <container> <snapshot>` | revert a container back to its state when the snapshot was taken |
| `incus snapshot delete <container> <snapshot>` | delete a snapshot |
| `incus snapshot rename <container> <snapshot> <snapshot_new_name>` | rename a snapshot |
| `incus config set <container> snapshots.schedule @daily` or `incus config set arch snapshots.schedule "0 6 * * *"` | auto-snapshot at a regular interval |
| `incus config edit <container>` | edit the snapshot configuration |
| `incus publish <container>/<snapshot> --alias <alias> description="<description>"` | create an image from a snapshot |
| `incus storage volume snapshot create <pool> <volume> [<snapshot_name>]` | create a snapshot of a volume |
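Combining the scheduling rows above with automatic cleanup, a daily snapshot policy might look like this (a sketch; the container name is arbitrary, and snapshots.expiry is Incus's expiry key):

incus config set c1 snapshots.schedule=@daily
incus config set c1 snapshots.expiry=7d     # prune snapshots older than a week
incus snapshot list c1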

Networking

| Command | Description |
| --- | --- |
| `incus network list` | list available networks (`-c 46demnstu` selects columns) |
| `incus network create incusbr1 ipv4.address=10.10.20.1/24 ipv6.address=none` | example of creating a managed network |
| `incus network create incusmacvlan --type=macvlan parent=ens3` | example of creating a macvlan network, useful for being reachable from outside |
| `incus launch images:alpine/edge/cloud alp1 --network incusmacvlan` | launch an instance on a specific network; it will inherit all its settings |
| `incus profile device add macvlan-profile eth0 nic name=eth0 nictype=macvlan parent=ens3` | add a macvlan NIC to a profile, connecting instances directly to the outside world |
| `incus network attach <network> <instance> <device>` | attach a network to an instance |
| `incus network set incusbr1 ipv4.address=10.10.10.1/24` | update the IP address |
| `incus config device add <instance> <device> nic nictype=<type>` | add a NIC to an instance |
| `incus network get <network_bridge> ipv4.address` | get the IP address of the bridge |
| `incus network list-leases <network_bridge>` | list DHCP leases |

Use macvlan to expose an instance, or bridged networking for internal connectivity; you can then forward ports or use load balancers.

Network types: `bridge|physical|ovn|sriov|macvlan`

Macvlan and sriov networks allow specifying presets to use when connecting instances to a parent interface. Instance NICs can then simply set the `network` option to the network they connect to, without knowing any of the underlying configuration details. NIC types: `bridged|macvlan|sriov|ipvlan|p2p|routed`. There is much more to Incus networking, such as network forwards, network integrations to connect to remote networks, ACLs, and load balancers, but that is too advanced for this simple cheatsheet.
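As an illustration of that network option, a NIC device can reference a managed network by name and inherit all its settings (a sketch, reusing the incusmacvlan network created above; the instance name is a placeholder):

incus config device add <instance> eth0 nic network=incusmacvlan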

Profiles

| Command | Description |
| --- | --- |
| `incus profile list` | list all profiles |
| `incus profile create <profile> < config.yaml` | create a profile from YAML |
| `incus profile delete <profile>` | delete a profile |
| `incus profile rename <profile> <new_name>` | rename a profile |
| `incus profile show <profile>` | show a profile's configuration |
| `incus profile assign <instance> <profile1>,<profile2>,...` | assign a set of profiles to an instance |
| `incus profile add <instance> <profile>` | add a profile to an instance |
| `incus profile remove <instance> <profile>` | remove a profile from an instance |
| `incus profile device [add\|get\|list\|remove\|set\|show\|unset] ...` | manage profile devices |
| `incus profile [set\|unset] <profile> <key>=<value>` | set or unset profile configuration keys |
  • Profiles gather configuration that can be applied to instances
  • The default profile applies its configuration to all instances unless something overrides it
  • Instances can override any profile setting with arguments at launch (see the sketch below)
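A sketch of that precedence, assuming a hypothetical "small" profile (names and limits are arbitrary):

incus profile create small
incus profile set small limits.memory=256MiB
incus launch images:alpine/edge alp-small --profile default --profile small   # "small" overrides "default"
incus launch images:alpine/edge alp-big --profile default --profile small -c limits.memory=512MiB   # launch args win over profiles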

Command Aliasing

| Command | Description |
| --- | --- |
| `incus alias list` | list command aliases |
| `incus alias add sebbraun 'exec @ARGS@ -- su -l sebbraun'` | create a command alias to enter a container as user sebbraun |
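With that alias in place, the container name takes the place of @ARGS@ (a sketch, assuming a container named mycontainer with a sebbraun user):

incus sebbraun mycontainer     # expands to: incus exec mycontainer -- su -l sebbraun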

Monitor & more

| Command | Description |
| --- | --- |
| `incus webui [remote:]` | open the web UI |
| `incus monitor --pretty --loglevel=error --type=logging` | tail the errors |
| `incus query <API path>` | send a raw query to the server |
| `incus query '/1.0/instances/<instance>/console?project=<project>&type=vga' > out.png` | grab a VM screenshot (6.8 feature) |
| `cat /var/log/incus/incusd.log` | server log file |

Installation

on Arch Linux

pacman -S incus
systemctl start incus
systemctl status incus
systemctl enable incus
incus admin init
gpasswd -a <user> incus        # allow <user> to use incus
gpasswd -a <user> incus-admin  # allow <user> to control incus as a non-root user
echo "0:1000000:1000000000" | sudo tee /etc/subuid /etc/subgid  # uid/gid ranges for unprivileged containers
systemctl restart incus

If you missed the configuration dump during installation, you can view it with

incus admin init --dump

For example, a storage pool named “pooldir” would be located at

/var/lib/incus/storage-pools/pooldir
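A quick smoke test confirms the setup works (a sketch; the image and instance name are arbitrary):

incus launch images:alpine/edge smoke-test
incus list
incus delete smoke-test --force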

webui installation

yay -S incus-ui-canonical
incus config edit    # set the following:
config:
  core.https_address: '[::]:8443'
systemctl restart incus
incus webui          # allows you to connect locally

To connect locally, open the URL printed by the command above.

  • create a certificate
  • download incus-ui.crt
  • trust it with
incus config trust add-certificate Downloads/incus-ui.crt
  • grab the corresponding incus-ui.pfx from the remote machine
  • double-click that file to add it to your OS
  • you should now be able to access it remotely through https://INCUS_SRV_IP:8443 (Chrome; or use Safari)
  • but I discovered it's way easier to access it using the newly released incus webui command

on macOS

You can drive your Incus server from macOS: just install the Incus client and point it at your server.

First, generate a token on your Incus server for your macOS client, install the client, and add the remote:

incus config trust add myclient   # run on the server; prints a token
brew install incus                # run on the Mac
incus remote add <remote_name> https://<login>:<pwd>@<ip>/

Use the token generated above to finish the remote setup, then select your default remote:

incus remote switch <remote_name>
incus webui

Networking

  • by default, incus admin init creates an incusbr0 bridge managed by Incus, which hands out private IPv4 and IPv6 addresses to your instances via DHCP
incus network list
incus network show incusbr0
  • beware: the physical network type steals the NIC from the host to give it to the instance, which is useful for the Open vSwitch use case
  • so if you want multiple IPs on the same NIC with different MAC addresses, you need the macvlan type; use ipvlan if you want to keep the same MAC address
  • but macvlan may not work over Wi-Fi: WPA/WPA2 security prevents two IPs with different MAC addresses coming out of the same Wi-Fi interface
  • and macvlan does not allow the instance to talk to its host (an ipvlan sketch follows below)
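Since ipvlan keeps the host's MAC address, it can be an option where macvlan fails; a minimal sketch (the instance, interface, and address are assumptions, and ipvlan NICs require a static address):

incus config device add <instance> eth0 nic nictype=ipvlan parent=ens3 ipv4.address=192.0.2.10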

The IP addressing of a managed network can be changed:

incus network set incusbr0 ipv4.address=10.10.10.1/24

Create a profile for instances that require direct connectivity:

incus profile create macvlan-profile
incus profile device add macvlan-profile eth0 nic name=eth0 nictype=macvlan parent=ens3
incus profile show macvlan-profile
config: {}
description: ""
devices:
  eth0:
    name: eth0
    nictype: macvlan
    parent: ens3
    type: nic
name: macvlan-profile
used_by:
- 
project: default

Now let's launch an instance and connect it to the physical NIC, stacking the macvlan profile created above on top of the default one:

incus launch images:alpine/3.21 alp3 --profile default --profile macvlan-profile

Later profiles override the settings of earlier ones; in this case, macvlan wins over the bridged networking setup of the default profile (see the check below).
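To verify which settings won, inspect the instance's expanded configuration (a sketch, using the alp3 instance launched above):

incus config show alp3 --expanded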

It's also possible to add a network device later:

incus network attach ens3 archopen eth0 eth0