VPN mesh versus traditional VPNs
The biggest selling point of Nebula is that it's not 'just' a VPN, it's a distributed VPN mesh. A conventional VPN is much simpler than a mesh and uses a simple star topology: all clients connect to a server, and any additional routing is done manually on top of that. All VPN traffic has to flow through that central server, whether it makes sense in the grander scheme of things or not.
In sharp contrast, a mesh network understands the layout of all its member nodes and routes packets between them intelligently. If node A is right next to node Z, the mesh won't arbitrarily route all of its traffic through node M in the middle—it'll just send traffic from A to Z directly, without middlemen or unnecessary overhead. The difference is easy to see if you sketch the traffic flow of a small virtual private network under each design.
All VPNs work in part by exploiting the bi-directional nature of network tunnels. Once a tunnel has been established—even through Network Address Translation (NAT)—it's bidirectional, regardless of which side initially reached out. This is true for both mesh and conventional VPNs—if two machines on different networks punch tunnels outbound to a cloud server, the cloud server can then tie those two tunnels together, providing a link with two hops. As long as you've got that one public IP answering to VPN connection requests, you can get files from one network to another—even if both endpoints are behind NAT with no port forwarding configured.
Where Nebula becomes more efficient is when two Nebula-connected machines are closer to each other than they are to the central cloud server. When a Nebula node wants to connect to another Nebula node, it'll query a central server—what Nebula calls a lighthouse—to ask where that node can be found. Once the location has been retrieved from the lighthouse, the two nodes can work out between themselves what the best route to one another might be. Typically, they'll be able to communicate with one another directly rather than routing through the lighthouse—even if they're behind NAT on two different networks, neither of which has port forwarding enabled.
By contrast, connections between any two PCs on a traditional VPN must pass through its central server—adding bandwidth to that server's monthly allotment and potentially degrading both throughput and latency from peer to peer.
Direct connection through UDP skullduggery
Nebula can—in most cases—establish a tunnel directly between two different NATted networks, without the need to configure port forwarding on either side. This is a little brain-breaking—normally, you wouldn't expect two machines behind NAT to be able to contact each other without an intermediary. But Nebula is a UDP-only protocol, and it's willing to cheat to achieve its goals.
If both machines reach the lighthouse, the lighthouse knows the source UDP port for each side's outbound connection. The lighthouse can then inform one node of the other's source UDP port, and vice versa. By itself, this isn't enough to make it back through the NAT pinhole—but if each side targets the other's NAT pinhole and spoofs the lighthouse's public IP address as being the source, their packets will make it through.
UDP is a stateless protocol, and very few networks bother to check for and enforce boundary validation on UDP packets—so this source-address spoofing works, more often than not. However, some more advanced firewalls may check the headers on outbound packets and drop them if they have impossible source addresses.
If only one side has a boundary-validating firewall that drops spoofed outbound packets, you're fine. But if both ends have boundary validation available, configured, and enabled, Nebula will either fail or be forced to fall back to routing through the lighthouse.
We specifically tested this and can confirm that a direct tunnel from one LAN to another across the Internet worked, with no port forwarding and no traffic routed through the lighthouse. We tested with one node behind an Ubuntu homebrew router, another behind a Netgear Nighthawk on the other side of town, and a lighthouse running on a Linode instance. Running iftop on the lighthouse showed no perceptible traffic, even though a 20Mbps iperf3 stream was cheerfully running between the two networks. So right now, in most cases, direct point-to-point connections using forged source IP addresses should work.
Setting Nebula up
To set up a Nebula mesh, you'll need at least two nodes, one of which should be a lighthouse. Lighthouse nodes must have a public IP address—preferably, a static one. If you use a lighthouse behind a dynamic IP address, you'll likely end up with some unavoidable frustration if and when that dynamic address updates.
The best lighthouse option is a cheap VM at the cloud provider of your choice. The $5/mo offerings at Linode or Digital Ocean are more than enough to handle the traffic and CPU levels you should expect, and it's quick and easy to open an account and get one set up. We recommend the latest Ubuntu LTS release for your new lighthouse's operating system; at press time that's 18.04.
Installation
Nebula doesn't actually have an installer; it's just two bare command line tools in a tarball, regardless of your operating system. For that reason, we're not going to give operating system specific instructions here: the commands and arguments are the same on Linux, MacOS, or Windows. Just download the appropriate tarball from the Nebula release page, open it up (Windows users will need 7zip for this), and dump the commands inside wherever you'd like them to be.
On Linux or MacOS systems, we recommend creating an /opt/nebula folder for your Nebula commands, keys, and configs—if you don't have an /opt yet, that's okay; just create it, too. On Windows, C:\Program Files\Nebula is probably a more sensible location.
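On Linux, that setup might look something like this (the tarball name below is an example; grab the correct one for your platform from the release page):

sudo mkdir -p /opt/nebula
cd /opt/nebula
# the tarball contains just the two tools, nebula and nebula-cert
sudo tar -xzf ~/Downloads/nebula-linux-amd64.tar.gz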
Certificate Authority configuration and key generation
The first thing you'll need to do is create a Certificate Authority using the nebula-cert program. Nebula, thankfully, makes this a mind-bogglingly simple process:
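# run from the directory where you put the tools; the name is an arbitrary label
./nebula-cert ca -name "My Nebula Network"

That's it: two files, ca.crt and ca.key, land in the working directory.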
What you've actually done is create a certificate and key for the entire network. Using that key, you can sign certificates for each individual node. Unlike the CA certificate, node certificates need to have the Nebula IP address for each node baked into them when they're created. So stop for a minute and think about what subnet you'd like to use for your Nebula mesh. It should be a private subnet—so it doesn't conflict with any Internet resources you might need to use—and it should be an oddball one so that it won't conflict with any LANs you happen to be on.
Nice, round numbers like 192.168.0.x, 192.168.1.x, 192.168.254.x, and 10.0.0.x should be right out, as the odds are extremely good you'll stay at a hotel, a friend's house, or somewhere else that uses one of those subnets. We went with 192.168.98.x—but feel free to get more random than that. Your lighthouse will occupy .1 on whatever subnet you choose, and you will allocate new addresses for nodes as you create their keys. Let's go ahead and set up keys for our lighthouse and nodes now:
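# one signing command per machine; the node names and IPs here are examples
./nebula-cert sign -name "lighthouse" -ip "192.168.98.1/24"
./nebula-cert sign -name "node-a" -ip "192.168.98.2/24"
./nebula-cert sign -name "node-b" -ip "192.168.98.3/24"

Each command emits a matching pair of files: lighthouse.crt and lighthouse.key, and so on.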
Now that you've generated all your keys, consider getting them the heck out of your lighthouse, for security. You need the ca.key file only when actually signing new keys, not to run Nebula itself. Ideally, you should move ca.key out of your working directory entirely to a safe place—maybe even a safe place that isn't connected to Nebula at all—and only restore it temporarily if and as you need it. Also note that the lighthouse itself doesn't need to be the machine that runs nebula-cert—if you're feeling paranoid, it's even better practice to do CA stuff from a completely separate box and just copy the keys and certs out as you create them.
Each Nebula node does need a copy of ca.crt, the CA certificate. It also needs its own .key and .crt, matching the name you gave it above. You don't need any other node's key or certificate, though—the nodes can exchange them dynamically as needed—and for security best practice, you really shouldn't keep all the .key and .crt files in one place. (If you lose one, you can always just generate another that uses the same name and Nebula IP address from your CA later.)
Configuring Nebula with config.yml
Nebula's GitHub repo offers a sample config.yml with pretty much every option under the sun and lots of comments wrapped around them, and we absolutely recommend anyone interested poke through it to see all the things that can be done. However, if you just want to get things moving, it may be easier to start with a drastically simplified config that has nothing but what you need.
Lines that begin with a hashtag are commented out and not interpreted.
Warning: our CMS is mangling some of the whitespace in this code, so don't try to copy and paste it directly. Instead, get working, guaranteed-whitespace-proper copies from Github: config.lighthouse.yaml and config.node.yaml.
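For orientation only, here is a heavily trimmed sketch of the shape of a lighthouse config. The file paths, the example public IP 203.0.113.10, and the wide-open firewall rules are placeholders; start from the working copies linked above for anything real:

pki:
  ca: /opt/nebula/ca.crt
  cert: /opt/nebula/lighthouse.crt
  key: /opt/nebula/lighthouse.key

static_host_map:
  '192.168.98.1': ['203.0.113.10:4242']

lighthouse:
  am_lighthouse: true
  hosts:
    # on a non-lighthouse node, set am_lighthouse to false and uncomment:
    # - '192.168.98.1'

listen:
  host: 0.0.0.0
  port: 4242

tun:
  dev: nebula1

logging:
  level: info

firewall:
  outbound:
    - port: any
      proto: any
      host: any
  inbound:
    - port: any
      proto: any
      host: any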
There isn't much different between lighthouse and normal node configs. If the node is not to be a lighthouse, just set am_lighthouse to false, and uncomment (remove the leading hashtag from) the line # - '192.168.98.1', which points the node to the lighthouse it should report to.
Note that the lighthouse:hosts list uses the nebula IP of the lighthouse node, not its real-world public IP! The only place real-world IP addresses should show up is in the static_host_map section.
Starting nebula on each node
I hope you Windows and Mac types weren't expecting some sort of GUI—or an applet in the dock or system tray, or a preconfigured service or daemon—because you're not getting one. Grab a terminal—a command prompt run as Administrator, for you Windows folks—and run nebula against its config file. Minimize the terminal/command prompt window after you run it.
root@lighthouse:/opt/nebula# ./nebula -config ./config.yml
That's all you get. If you left the logging set at info the way we have it in our sample config files, you'll see a bit of informational stuff scroll up as your nodes come online and begin figuring out how to contact one another.
If you're a Linux or Mac user, you might also consider using the screen utility to hide nebula away from your normal console or terminal (and keep it from closing when that session terminates).
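For instance, something like this launches Nebula in a detached screen session (paths assume the /opt/nebula layout from earlier):

screen -dmS nebula /opt/nebula/nebula -config /opt/nebula/config.yml
# reattach later with: screen -r nebula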
Figuring out how to get Nebula to start automatically is, unfortunately, an exercise we'll need to leave for the user—it's different from distro to distro on Linux (mostly depending on whether you're using systemd or init). Advanced Windows users should look into running Nebula as a custom service, and Mac folks should call Senior Technology Editor Lee Hutchinson on his home phone and ask him for help directly.
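If you're on a systemd distro and want a starting point, a minimal unit file along these lines should work (again assuming the /opt/nebula layout); save it as /etc/systemd/system/nebula.service, then run systemctl enable --now nebula:

[Unit]
Description=Nebula mesh VPN node
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/opt/nebula/nebula -config /opt/nebula/config.yml
Restart=always

[Install]
WantedBy=multi-user.target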
Conclusion
Nebula is a pretty cool project. We love that it's open source, that it uses the Noise protocol framework for crypto, that it's available on all three major desktop platforms, and that it's easy...ish to set up and use.
With that said, Nebula in its current form is really not for people afraid to get their hands dirty on the command line—not just once, but always. We have a feeling that some real UI and service scaffolding will show up eventually—but until it does, as compelling as it is, it's not ready for 'normal users.'
Right now, Nebula's probably best used by sysadmins and hobbyists who are determined to take advantage of its dynamic routing and don't mind the extremely visible nuts and bolts and lack of anything even faintly like a friendly interface. We definitely don't recommend it in its current form to 'normal users'—whether that means yourself or somebody you need to support.
Unless you really, really need that dynamic point-to-point routing, a more conventional VPN like WireGuard is almost certainly a better bet for the moment.

The Good
- Free and open source software, released under the MIT license
- Cross platform—looks and operates exactly the same on Windows, Mac, and Linux
- Reasonably fast—our Ryzen 7 3700X managed 1.7Gbps from itself to one of its own VMs across Nebula
- Point-to-point tunneling means near-zero bandwidth needed at lighthouses
- Dynamic routing opens interesting possibilities for portable systems
- Simple, accessible logging makes Nebula troubleshooting a bit easier than WireGuard troubleshooting
The Bad
- No Android or iOS support yet
- No service/daemon wrapper included
- No UI, launcher, applet, etc
The Ugly
- Did we mention the complete lack of scaffolding? Please don't ask non-technical people to use this yet
- The Windows port requires the OpenVPN project's tap-windows6 driver—which is, unfortunately, notoriously buggy and cantankerous
- 'Reasonably fast' is relative—most PCs should saturate gigabit links easily enough, but WireGuard is at least twice as fast as Nebula on Linux
Getting started with LinuxKit

One of the major announcements last week at DockerCon 2017 was LinuxKit, a framework for creating minimal Linux OS images purpose-built for containers. Docker has been using the tools that make up LinuxKit for some time, and the products derived from the tooling include Docker for Mac.
Sounds cool, and the best way to learn about a tool is to dive into using it! Given the extra time that I had on the plane home from Austin, I did just that and would like to share with you an easy way to get started using LinuxKit.
To get going you’ll need a few things:
- A 2010 or later Mac (a CPU that supports EPT)
- OS X 10.10.3 or later
- A Git client
- Docker running (In my case, 17.04.0-ce-mac7 (16352))
- GNU make
- GNU tar
- Homebrew
Let’s get started!
Installing xhyve
First, we'll need to install xhyve. Xhyve is a hypervisor built on top of OS X's Hypervisor.framework that allows us to run virtual machines in user space. It is what Docker for Mac uses under the hood! There are a couple of ways to do this; the easiest is to use Homebrew. Fire up your favorite terminal and install it:
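# assumes Homebrew is already set up and working
brew install xhyve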
Once you have that complete, see if it works by asking the binary for its usage summary:
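xhyve -h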
If you get a response with the various available flags, you are ready for the next step.
Building the moby tool
The next step in getting started is to build the moby tool. This tool is what will provide the functionality to read in the yaml we will specify later, execute the various docker commands to build the Linux OS, and, if you'd like, run the image. In this example, we'll be using xhyve to run the image rather than project moby's HyperKit. I found that building the kernel and initrd images takes quite a bit less time than a qcow image, so if you need to iterate quickly, xhyve is the way to go.
First, cd into whatever workspace you’d like to use, and clone the LinuxKit repo:
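# the repo URL as of this writing
git clone https://github.com/linuxkit/linuxkit.git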
Then, make and install the moby binary:
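# from the repo root; these Makefile targets build the moby binary and
# install it under /usr/local/bin (targets as of the repo at the time)
cd linuxkit
make
sudo make install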
Once you have this complete, you should be ready to build your first Linux image using LinuxKit.
Customizing and Building a Linux image
With the prerequisites out of the way, let's build our first image. The LinuxKit project includes a few examples, some of which were demoed at DockerCon. Rather than getting super complicated out of the gate, let's build a simple image that fires up a redis instance on boot.
First, you'll need to start with a yaml file that describes your Linux image. Pulling from the examples, we'll take the base docker image and add an entry for redis:
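# a sketch in the moby yaml format of the time; the image pins below are
# placeholders, so copy current ones from the examples in the LinuxKit repo
kernel:
  image: "linuxkit/kernel:4.9.x"
  cmdline: "console=ttyS0"
init:
  - linuxkit/init:latest
  - linuxkit/runc:latest
  - linuxkit/containerd:latest
onboot:
  - name: dhcpcd
    image: "linuxkit/dhcpcd:latest"
services:
  - name: getty
    image: "linuxkit/getty:latest"
  - name: redis
    image: "redis:alpine"
    net: host
outputs:
  - format: kernel+initrd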
Let's quickly examine this file. Without getting into all of the details (which are available on the LinuxKit git repo), we'll focus on the major blocks. The beginning of the file spells out the "base" Linux docker image that defines the kernel and kernel command line options. Next, the file lists the base OCI-compliant LinuxKit images that are required for init. After that, the file describes more base images that will be run by runc sequentially before any other services are started. Next up are the services (again, OCI-compliant images) that will be started by containerd, which are meant to remain running. And last, the file names the output files which moby will create as part of the build process. With the base file created, let's build our image!
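Assuming you saved the yaml as linux-redis.yml (the base name is what seeds the output filenames):

moby build linux-redis.yml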
As it runs, moby pulls the images it needs and then writes out the build artifacts.
And that’s it! Here are the files that are created:
- The raw kernel image (linux-redis-kernel)
- The init ramdisk (linux-redis-initrd.img)
- A file containing the command line options you'll need to provide to xhyve (linux-redis-cmdline)
Running the LinuxKit output with xhyve
Now that we've built the image, let's run it! First, we'll have to create a script that tells xhyve how to instantiate the image as a virtual machine. Now, I did say before that if you wanted to, you could have set the moby tool to output a qcow image, and then use moby run to run the image as a VM. I'd rather use xhyve, as this is how other non-LinuxKit operating systems can be run on the Hypervisor.framework. Let's do it.
We'll need one thing to run our image: a script that defines for xhyve the parameters for building our virtual machine. The main items needed in this script are what we got from the moby build process. Here's an example to fire up our redis server:
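#!/bin/sh
# a sketch of an xhyve launch script; the memory size and device slots are
# reasonable defaults rather than anything LinuxKit mandates
KERNEL="linux-redis-kernel"
INITRD="linux-redis-initrd.img"
CMDLINE="$(cat linux-redis-cmdline)"

xhyve \
  -A -m 1G \
  -s 0:0,hostbridge -s 31,lpc \
  -s 2:0,virtio-net \
  -l com1,stdio \
  -f "kexec,$KERNEL,$INITRD,$CMDLINE"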
Once you have the file created, make it executable and run it. A couple things to note at this point:
- If you have any VPN running to secure your connection to a wireless network, etc., shut it down. There are some known issues with routing traffic when the VPN is up. There may be a fix on the interwebs for this, but I didn't have time to research the proper route which needs to be added to operate with VPN networking in place on OS X.
- You’ll need to execute the script with superuser privileges if you are going to network the virtual machine.
That done, let’s run it:
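# run-redis.sh is whatever you named the xhyve script above
chmod +x run-redis.sh
sudo ./run-redis.sh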
Once you run this, a bunch of output will fly by as the virtual machine is run. At the end, you'll get a command line prompt on your newly minted VM!
Let’s see if the redis server is running:
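# at the VM's console; assuming a busybox shell, netstat lists listening sockets
netstat -tln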
Looks like our machine is listening on 6379, the redis port. Now, let’s see if that is exposed properly on the networkand reachable. First, find the IP address of your VM:
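# still at the VM console; the interface name is a guess, adjust as needed
ifconfig eth0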
Our IP is 192.168.64.17. From the Mac OS X host, we’ll use netcat and test the connection and server status:
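# from the Mac; redis accepts inline commands, so PING should come back +PONG
# (-w 1 tells nc to give up after a second of idle rather than hang)
printf 'PING\r\n' | nc -w 1 192.168.64.17 6379

A +PONG reply means redis is reachable from the host.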
Once you are done poking around, issue halt to shut down the VM:
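halt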
Success! We have a working LinuxKit image that is running a userland VM on OS X! Very cool. This is just the beginning. You can use the templates to create other images, customizing them to your liking. Also, you don't need to use redis. Try using your own docker images with your own services, and stand up the VMs.
In future posts, I'll explore how to use the other Moby Project tools, like InfraKit and HyperKit, leveraging these to stand up LinuxKit OS images to provide more than just a quick testbed.
Oh, and take a look at the size of the two files created by LinuxKit that represent our Linux OS :)