I recently got interested in trying out iSCSI, since I had spare capacity on my server. For those unaware, iSCSI exposes block devices over a network: instead of a file system, it exposes a (virtual) disk, and lets the system connecting to it manage the higher-level details, including its own file system. This has very different trade-offs from file sharing protocols like SMB/NFS; sharing the disk between multiple clients isn’t really possible, but you avoid a lot of the performance impact from (often different) file system semantics.
This makes it possible to do things you might otherwise not recommend with file sharing, like hosting a Steam library on it. Especially so if you have the iSCSI setup on its own network. Remember, most file systems assume a mostly direct connection to the disk; running this over a shared Ethernet connection, let alone WiFi, might not be the best idea.
Also note that I’m not describing a secure setup here. This is very much “baby’s first”, and should only be done on a secure network, or as an experiment. Securing it will involve properly configuring things like portal groups, and isn’t covered in this article. I might cover it in a later article.
This also synthesizes a lot of information I found online; in particular, it digests the FreeBSD handbook’s coverage of the iSCSI target subsystem and ZFS volumes, plus Red Hat and Oracle documentation on iscsiadm.
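To give a flavour of what’s involved, here’s a minimal sketch of the FreeBSD target side. The pool name tank, the dataset name, and the IQN are all placeholders, and the no-authentication bits are exactly the insecurity warned about above. A ZFS volume backs the LUN, and /etc/ctl.conf describes the target:

```
# Create a sparse 100 GB ZFS volume to back the LUN
zfs create -s -V 100G tank/steam
```

```
# /etc/ctl.conf -- minimal and wide open; fine for a lab, not for production
portal-group pg0 {
    discovery-auth-group no-authentication
    listen 0.0.0.0
}

target iqn.2023-01.com.example:steam {
    auth-group no-authentication
    portal-group pg0
    lun 0 {
        path /dev/zvol/tank/steam
    }
}
```

```
# Enable and start the target daemon
sysrc ctld_enable=YES
service ctld start
```

On the Linux initiator side, iscsiadm discovers the target and logs in (192.0.2.10 stands in for the server’s address); after login, the LUN shows up as an ordinary block device you can partition and format like a local disk:

```
# Ask the portal which targets it offers
iscsiadm -m discovery -t sendtargets -p 192.0.2.10

# Log in; check dmesg for the new /dev/sdX device
iscsiadm -m node -T iqn.2023-01.com.example:steam -p 192.0.2.10 --login
```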
Recently, I picked up a Dell XPS 13 9300. While it’s a few years old, I got it for quite a bit below market value ($500 CAD, when equivalent-ish models range from $600 to $900 on the used market). While I don’t plan to use it as my daily driver, I did have a need for a newer Intel machine; I didn’t have anything after Haswell, just my Ryzen desktop and M1 MacBook Air. I decided to give it a shot, and overall I was pleased by what I saw, albeit with some caveats. Here’s what I think…
This post was inspired by some controversy around Valve and their support for Linux, but the bulk of it comes from long-term observation. One of the biggest boosts to the viability of Linux on the desktop was Valve’s Proton, a Wine fork integrated into Steam that lets almost any Windows game work out of the box. To Linux users, life was good. However, with the recent announcement of the Steam Deck, a handheld device powered by Linux, Valve’s marketing towards developers explicitly mentions that no porting is required. Valve has pushed this message aggressively enough that they’ve allegedly told developers simply not to bother with native Linux ports anymore; enough that it worries commercial porters like Ethan Lee.
However, I suspect this is the long-term result of other factors, and games are only one aspect of it. After all, we all know the Year of the Linux Desktop is around the corner, along with nice applications. Linux won’t rule the world just from games, even if some people really want it to be true. How did it come to this, and why?
I’m someone who cares about making software portable. In fact, I have a job that’s basically about doing so. For most Unix-shaped things (better known as things, since Unix destroyed all competition), the POSIX standard exists to codify common attributes and provide common ground. Unfortunately, this is made far more complicated by systems doing many things outside of POSIX’s lowest common denominator, and by systems just not implementing POSIX correctly. People tend to think “portability” means whatever operating systems they happen to use, and assume the lowest common denominator is that. While many guides recommend writing software in a disciplined (or tortured, if you disagree) manner, with separate compilation units for platform differences where possible, the reality is that your codebase will have #ifdefs and a configure script if it does anything useful. Not to mention the increasing irrelevance of the standard itself.
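As a contrived illustration of that reality: counting online CPUs sounds like it should be trivially portable, but the usual sysconf name is a common extension rather than core POSIX, so real code ends up looking something like the sketch below (HAVE_SC_NPROCESSORS_ONLN is a hypothetical macro a configure script would define after probing):

```c
/* A sketch of the #ifdef reality of "portable" code. */
#include <unistd.h>

#if defined(__FreeBSD__) || defined(__APPLE__)
#include <sys/types.h>
#include <sys/sysctl.h>
#endif

long cpu_count(void)
{
#if defined(HAVE_SC_NPROCESSORS_ONLN)
    /* Common extension on glibc, musl, and friends -- but not core POSIX */
    return sysconf(_SC_NPROCESSORS_ONLN);
#elif defined(__FreeBSD__) || defined(__APPLE__)
    /* BSD-flavoured fallback via the hw.ncpu sysctl */
    int ncpu = 0;
    size_t len = sizeof(ncpu);
    int mib[2] = { CTL_HW, HW_NCPU };
    if (sysctl(mib, 2, &ncpu, &len, NULL, 0) != 0)
        return -1;
    return ncpu;
#else
    return -1; /* the configure script should have told us what to do here */
#endif
}
```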
tl;dr: As much as I respect the efforts undertaken by groups like Gnome and elementary, I have to wonder if what they’re building is enough, or if it merely provides an illusion of substance.
There’s been a lot of effort spent on the Linux desktop. The groups I respect the most on this front are Gnome and elementary, due to their focus on UX design and trying new things. While Gnome has been controversial for its design and its stance on design, I think a lot of the controversy on that front is unmerited (i.e. Gnome’s design isn’t actually as tablet-oriented as the peanut gallery thinks). I appreciate that someone is trying to do something other than “Windows 98 stomping on a human face, forever”, and it’s what I use on my desktop. elementary is controversial for other reasons (also unmerited; a man’s gotta eat, and that desktop won’t get built by people paid in exposure), but its design has been considered very nice (often making it recommended as a “my first distro”), if a bit derivative at first glance. What makes it more interesting amid the morass of OSS UX clones is treating UX as a priority and value (instead of something that’s just there) and iterating on existing UX. Sometimes it works out, sometimes it doesn’t, but I respect the attempt at trying something new and seeing if it’s better.
However, I wonder if what they’re doing is enough. They have a desktop, many components of that desktop, and human interface guidelines (elementary, Gnome); all the components you need. What I think is missing is the substance. Where’s the ecosystem of applications that embrace the HIG, and how do the intricacies of the environment come into play for complex applications and situations?
I needed to connect to a Fortinet SSLVPN, but the certificate on it had expired. While the official Mac client prompts you and lets you connect anyway, Linux with NetworkManager (and the FortiSSLVPN plugin) would refuse without any error message. Unfortunately, I couldn’t ask the administrator to renew the certificate. What you can do is add the certificate as a trusted certificate for that VPN; however, the interface to do this is unclear, so I’ll try to explain it here.
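For the impatient, here’s one command-line route that should amount to the same thing, assuming the plugin drives openfortivpn underneath and so honors its trusted-cert setting (a SHA-256 digest of the gateway’s certificate); gateway.example.com and “Work VPN” are placeholders for your portal and connection name:

```
# Fetch the gateway's certificate and compute the digest openfortivpn expects
openssl s_client -connect gateway.example.com:443 </dev/null 2>/dev/null \
    | openssl x509 -outform der | sha256sum

# Pin that digest on the NetworkManager connection
nmcli connection modify "Work VPN" +vpn.data "trusted-cert=<digest from above>"
```

With that set, the plugin should treat the otherwise-invalid certificate as trusted for this connection only.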