Compiling the Linux kernel docs

In the last article, I said that compiling and installing source versions of software was akin to “going rogue”. I must confess that I have compiled from source and installed software that wasn’t in my distribution. The most recent was TexStudio, one of the larger projects, requiring tons of other libraries and whatnot to also be installed (or, quite often, compiled from source on the side), since it wasn’t part of the Linux distro I was using at the time. It also wasn’t part of Cygwin, so I compiled it for that too. It was a great way to kill an afternoon.

But there was a time when I compiled the kernel from source. It was necessary for me, as speed was an issue and I had slow hardware at the time. What I also had was a mixture of hardware pulled from different computers at different times. I researched the specs on sound cards, network cards, video cards and motherboard chipsets, and knew what to tweak in the kernel compilation dialogs to get the kernel to do the right thing: be fast and recognize all my hardware. I was doing this before the days of modules, with the version 1.x kernels. It worked, and it was noticeably faster than the stock kernels. X-Windows on my 80486 PC ran quite well with these compiled kernels, but was sluggish to the point of being unusable with a stock kernel. Every few versions of the kernel, I would re-compile a new kernel for my PC, and pretty soon the Tcl/Tk configuration dialogs had made things easy enough that I could answer all the questions from memory.

But then that all ended with version 2. Yes, I compiled a version 2 kernel from source, and yes, it ran OK. But it also had modules. The precompiled kernels were now stripped down and lean, and the modules would only be added as needed when the kernel auto-detected the presence of the appropriate hardware. After compiling a few times, I no longer saw the point from a performance standpoint, and today we are well into kernel version 5.3, and I haven’t compiled my own kernel for a very long time.

For the heck of it, I downloaded the 5.3 kernel, which uncompressed into nearly 1 gigabyte of source code. I studied the config options and the Makefile options, and saw that I could just run “make” to create only the documentation. So that’s what I did.

It created over 8,500 pages of documentation across dozens of PDF files, all generated, with errors, in about 3 minutes. The errors manifested as 24 zero-length PDFs under the Documentation directory, which presumably didn’t compile properly; otherwise the page count would have easily tipped the scales at 10,000. I have a fast-ish processor, an Intel 4770K (a 4th-generation i7), which I never overclocked, running on what is now a fast-ish gaming motherboard (an ASUS Maximus VI Hero) with 32 gigs of fast-ish RAM. The compilation, even though it was only documentation, seemed to go screamingly fast on this computer, much faster than I was accustomed to (although I guess if I am using 80486s and early Pentiums as a comparison …). The output of the LaTeX compilation to standard error was a veritable blur of underfull hbox warnings and page numbers.
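Counting the failed PDFs is a one-liner with find. Here is a self-contained sketch, using a scratch directory with made-up file names rather than the real Documentation tree:

```shell
# Create a scratch directory with one real PDF and two zero-length ones
# (stand-ins for the broken Documentation output)
tmp=$(mktemp -d)
echo "content" > "$tmp/ok.pdf"
touch "$tmp/empty1.pdf" "$tmp/empty2.pdf"

# -empty matches the zero-length files; wc -l counts them
find "$tmp" -name '*.pdf' -empty | wc -l    # prints 2

rm -r "$tmp"
```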

For the record, the pagecount was generated using the following code:

#! /bin/bash

# running total of pages
tot=0
for i in *.pdf ; do
        # if the PDF is of non-zero length then ...
        if [ -s "${i}" ] ; then
                j=`pdfinfo "${i}" | grep ^Pages`
                j=`awk '{gsub("Pages:", ""); print}' <<< ${j}`
                # give a pagecount/filename/running total
                echo ${j}    ${i}    ${tot}
                # tally up the total so far
                tot=$((tot + j))
        fi
done

echo Total page count: ${tot}

The next step for Linux development

As you might know, there are nearly 300 Linux distributions (currently 289, low in historical terms), and this is a testament to how successful the Linux kernel has become on the PC, as well as on other devices, especially in relation to previously-existing *NIX systems, which have either fallen by the wayside or barely exist in comparison. A distant second among *NIX systems might be BSD UNIX.

Just earlier today, I observed that for the installation of TexStudio, for instance, there are two installation images for MS-Windows (all versions from Windows 7 on up), the only distinction being between 32-bit and 64-bit. On the other hand, there was a plethora of Linux images, depending on which distro of Linux you used. My distro is Ubuntu Studio, and I use Gnome as the window manager, but the only Ubuntu-based Linux images were for Xubuntu (which uses Xfce as its window manager).

Apparently, it also seems necessary to compile a separate image each time a Linux distro is upgraded. The 19 images I counted for Xubuntu covered versions 14 through 19. Now, I understand that separate images need to be compiled for different processors, but most of these are for PCs running 32-bit or 64-bit processors. The same was true for each upgrade of Debian, Fedora, and OpenSuse, and even then, each needed binaries separate from the others. There are easily more than 50 Linux-based installation images to choose from at the moment.

The “package” system that is now near-universal in the Linux environment provides a convenient way for sysops to assure themselves that installations can happen without problems to the system. Before that, one compiled most new software from source, tweaking system variables and modifying config files to conform to whatever was on your system. This has since become automated with the “./configure” scripts or “make config” targets that most source code comes with these days. In other words, modernization of Linux seems to mean increasing levels of abstraction, and increasing trust that the judgement of a “make config” trumps human judgement of what needs configuring for the source to compile. On a larger scale, trusting a package manager over our own common sense can be seen as working “most of the time”, so when an installation fails due to a package conflict, there is a temptation to be lazy and just find something else to install instead of our first choice. Installing software by compiling from source, once seen as a rite of passage for any sensible Linux geek, is now seen as “going rogue”, since it subverts the package manager and, in a sense, takes the law into your own hands.

Of course, Linux distributions still exist for the latter kind of Linux user. The foremost, in my opinion, is Slackware (if you screw up, at least something will run), with Arch Linux a close second. It is my understanding that Arch Linux requires much more knowledge of your own hardware just to boot the system, whereas Slackware will likely at least boot if your knowledge of the hardware is not quite so keen (but still keen). My experience with Slackware is in the distant past, so I am not sure what the norm is these days, although I understand they still use tarballs, which I remember allowed me to play with an installation by uncompressing it into a directory tree not intended for the installation, to see what was inside before I committed to installing it. The tarballs are compressed nowadays with “xz” compression, giving the files a “.txz” extension.
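The inspect-before-committing trick is easy to reproduce. Here is a minimal sketch, using a throwaway archive in a scratch directory rather than a real Slackware package (it assumes GNU tar with xz support; the file names are made up):

```shell
# Build a tiny stand-in for a Slackware-style .txz package
tmp=$(mktemp -d)
mkdir -p "$tmp/pkg/usr/bin"
echo 'echo hello from demo' > "$tmp/pkg/usr/bin/demo"
tar -C "$tmp/pkg" -cJf "$tmp/demo.txz" usr

# Peek at the contents without installing anything
tar -tJf "$tmp/demo.txz"

# Or unpack it off to the side for a closer look
mkdir "$tmp/inspect"
tar -C "$tmp/inspect" -xJf "$tmp/demo.txz"
ls "$tmp/inspect/usr/bin"    # shows: demo

rm -r "$tmp"
```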

But I digress. Getting back to installation images, it should not be too difficult for the people who manage these Linux distros to make so many different images for the same distribution less necessary. In the MS-Windows example, only one version of TexStudio was needed across three or four different Windows versions. I am running Windows 7 with software that didn’t exist in the days of Windows 7, and with other software that originated on Windows 2000. All of it still runs, and runs quite well. Fixing this problem is hopefully doable in the near future.

Gnome, a tale of a dead fail whale with a happy ending …

I moved my window manager from Xfce to Gnome today, and spent most of the day so far getting gdm3 to work. For a while I had two display managers installed, then narrowed it down to gdm3 and uninstalled the other one.

The login manager failed to come up, and for most of this morning I was stuck in a character console. In GNU/Linux, strange things happen when you read a lot of documentation and error messages. I began to see artifacts that are in themselves hilarious, although after hours of poring through debug and error messages, I first thought I needed a long break. But no. The same phrase can be googled, and others have reported seeing it, thus confirming my strange experience.

The error I saw was

We failed, but the fail whale is dead. Sorry.

So, what on Earth is a “fail whale”? It appears to mean that the part of the server that issues error messages has died. Apparently, gdm3 itself didn’t die, since running ps showed that it was still running, although not presenting a login screen.

It turns out that the “fail whale” was a meme created by the artist Yiying Lu, referring to the error page once shown by Twitter. I guess I missed out on that meme.

Somewhere in the thicket of error and debug messages was a reference to the fact that /usr/share/gnome-sessions/sessions/ubuntu.session did not exist. I went to that location as root, and symlinked gnome.session to ubuntu.session.

cd /usr/share/gnome-sessions/sessions/
ln -s gnome.session ubuntu.session

That appeared to be all that was needed. I was able to log on to a gnome desktop.

BoUoW: Bash on Ubuntu on Windows

Tux is telling you the most current Ubuntu running for Windows for BoUoW.

I am not proud of possibly inventing the ugly acronym “BOUOW”, but “BASH on Ubuntu on Windows” appears to compel it. Maybe we can pronounce it “bow-wow”, though I am not sure that’s complimentary. I just did a Google search and, no, as predicted, I couldn’t have invented it: it is variously acronymed B.O.U.O.W. or BoUoW. It has been around since at least March of 2016, giving end users, computer geeks, and developers plenty of time to come up with something of a nickname or acronym.

But I actually mean to praise BoUoW, and to give it considerably high praise. This is a brave move on Microsoft’s part, and a long time coming. MS has made *NIX access available in its kernel for some time now, making *NIX conventions possible on the command line, like certain commands in PowerShell. To get BoUoW, the user has to enable the capability in the Windows 10 settings (“Windows Subsystem for Linux”, or WSL), put Windows into “Developer mode” as Admin, and follow the instructions on the MSDN website to download the binaries and enable a bash shell from either the command line or PowerShell.

BoUoW takes advantage of the WSL to do impressive things like use the same network stack as Windows 10 itself. This is because with WSL enabled, a UNIX command such as SSH can now make calls directly to the Windows 10 kernel to access the network stack.

This is, by Microsoft’s admission, a work in progress. It would worry me if they had not said that. But lots of things do work: vi works and is symlinked (or somehow aliased) to vim, and the bash shell comes with other common aliases, like “ll” for “ls -l”. As part of the installation, you actually get a miniature version of Ubuntu, complete with a C compiler, plus Ruby, Perl, and Python; and if something isn’t installed, you can always use “apt-get” to install it.

One of the security features has the disadvantage that BoUoW is installed separately for each user. If a user types “bash” in a cmd window and BoUoW is not installed for that user, the install happens all over again, with the image going under that user’s AppData directory. If you are using an SSD for your C: drive like me, you might find that limiting due to a shortage of space.

There are many things not yet recommended. If you are a serious web developer, for example, you will find that many of the things you want, such as MySQL, do not currently work the right way. If you are a systems programmer, you’ll find that ps and top only see the Unix-side processes, so I wouldn’t use BoUoW for any serious process management. That being said, it does contain the old standbys: grep, sed, and awk.

The compiling and output of my “Hello, world!” program, also showing the source code.

gcc had to be installed separately. The binary it created for my “Hello, world!” program lacks the Microsoft .exe extension; as is usual for Unix binaries, it has no default extension at all. It is gcc version 4.8.4, while the current version is 6.3. This older gcc usually won’t pose a problem for most users.

The current stable Ubuntu is 16.04, but BoUoW uses the previous LTS release, 14.04, and thus has slightly older versions of Perl (5.18), Python (2.7.6), bash (4.3.11), Ruby (1.8, available using apt-get), vim (7.4), and other software. Vim, however, appears to be the “large” version, which is expandable using plugins like Vundle, which is good news. I don’t suspect that these slightly older versions will cause anyone problems, except possibly Python, which has since gone all the way up to version 3.5.2. You are also warned that under Python or Perl you might run into problems, since not all of their libraries run correctly under BoUoW. Not surprising, since Python has hundreds of installable libraries and Perl has thousands. Checking them all could take a while.


YALD (Yet Another Linux Distro): Knoppix 7.4.2

Linux Pro Magazine, featuring GIMP in its Winter 2015 Edition.

To add to the distros I have already reviewed in terms of their suitability for running on the Hewlett-Packard TX2 or TM2 tablets, I had not said anything about the Knoppix distribution specifically. I saw one sold in a special edition of Linux Pro Magazine, and in a fit of irrational impulse purchasing, ponied up my 20 bucks with tax, and tried it on my laptop.

Linux Pro Magazine was using the Knoppix CD to showcase GIMP, but with pretty close to the most recent versions of GIMP installed on all my Windows and Linux installations (I do run a blog, after all), I do not need to be sold on GIMP. It’s a great free, open-source package for editing and manipulating photos, along the lines of Photoshop. It would have been nice if they had included an article on how to write your own scripts for the Script-Fu feature in GIMP. This ever-elusive and mysterious feature remains largely shrouded in secrecy, except for the few websites that post a page or so on it.

But I wanted to see how the latest Knoppix ran on my laptop. Indeed, version 7.4.2 of Knoppix is the latest version, according to the website. Knoppix is the Linux distribution known for being a live operating system, so if you want to try Knoppix, there is no installation needed. My HP TM2, in the grand tradition of “modern” computers having fewer and fewer media inputs than ever before, comes without a built-in DVD-R drive. So I plugged in a USB2 one (the TM2 has no USB3 ports, not that it would matter for a DVD-R anyway) and booted into Knoppix.

And I was pleasantly surprised to find that just about everything seemed to work. It recognized my wi-fi, and I found I could use pen, mouse, and screen touch without any lag. I was able to see and hear videos on YouTube. And of course, GIMP ran. On a live DVD, GIMP took about 40 seconds to start (starting from an installation on my hard disk on my PC took under 5 seconds in Ubuntu Studio).

Back to Knoppix. As expected, the screen rotation key is not mapped. However, I can see no Linux program that does this. Postings to many fora on the topic go unanswered. There was one discussion on rotation with the Nvidia chipset, but the TM2 uses Intel for video, so I was out of luck. Since I need to rotate the screen frequently in my work, this has been the one limitation that has stopped me from using Linux on my laptops.

BASH prompts: Box-drawing characters

An xterm session with BASH prompts containing box-drawing characters. The rest of the screen is the output of repeated fortune commands.

I used to be a big user of xterm’s box drawing characters. I hadn’t been aware that they could be used in prompts.

But I recently heard a (probably dated) discussion on how box drawing characters could be used in a command prompt.

I think that’s a great idea; however, the big problem I found was doing it in a way that correctly turns off the drawing so that you can display normal text again. Otherwise, a lot of text ends up looking garbled.

First, let me say that I used the “twtty” example code on Giles Orr’s BASH prompt website, which I modified to allow actual box prompts. A clue was provided in the HOWTO, where they showed, in a very brief way, the entire “catalogue” of box-drawing characters, which I pasted into an xterm:

echo -e "\033(0abcdefghijklmnopqrstuvwxyz\033(B"

The box-drawing characters are whatever you type in lowercase after you output the escape sequence “\033(0”. I needed the echo command (echo -e) to get that to work. The output of echo -e can be stored in a string like this:

local box1=`echo -e "\033(0qqqqqqqqqqqqqqqq\033(B"`

“\033(0” turns on the box-drawing escapes, while “\033(B” turns them off. The string of “q” characters draws a horizontal line. There were some other tweaks I made to his code to get more complete box characters for a two-line prompt, but they had the side effect of the line not going all the way across the screen like the original. Adding six characters fixed it, albeit in a kludgy kind of way:

ESC_IN=`echo -e "\033(0"`  # turn on box-drawing
ESC_OUT=`echo -e "\033(B"` # turn off box-drawing

function prompt_command {
    #   Find the width of the prompt:

    #   Add all the accessories below ...
    local l="${ESC_IN}l${ESC_OUT}"
    local m="${ESC_IN}m${ESC_OUT}"
    local temp="${l}-(${usernam}@${hostnam}:${cur_tty})---(${PWD})--"

    let fillsize=${TERMWIDTH}-${#temp}+6

What I mean by “kludgy” is that I simply added a “6” on the last line above, which controls the number of characters required for the first line of the prompt to go across the screen. The 6 compensates for the non-printing bytes that ${#temp} counts but the terminal never displays: the “\033(0” and “\033(B” sequences wrapped around the corner character are three bytes each.
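The miscount is easy to verify in bash, since ${#...} counts the escape bytes even though the terminal never displays them. A small sketch:

```shell
# One corner character wrapped in the two box-drawing escape sequences
l=$(printf '\033(0l\033(B')

# bash sees 7 characters: six escape bytes plus the one visible "l"
echo "${#l}"    # prints 7
```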

I added two variables which are occasionally useful: $ESC_IN for turning on the box-drawing feature, and $ESC_OUT for turning it off. Inside the function prompt_command, I added the variables $l and $m, since his code uses dashes and I wanted box-drawing lines instead. $l holds the box-drawing output for the letter “l”, while $m holds it for the letter “m”. These generate the two left-hand corners of the box, which occur on the far left of the prompt, and they do join up. The $l is used in the statement below, in the form of ${l}, to begin building the string $temp. I could have done something with the dashes in this string, such as using “${ESC_IN}qqq${ESC_OUT}” in place of “---”, but there were problems when I was too overzealous, so some dashes were left as-is.
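As a minimal, self-contained sketch of those corner characters (assuming an xterm-compatible terminal that understands the DEC graphics character set):

```shell
ESC_IN=$(printf '\033(0')    # switch to box-drawing characters
ESC_OUT=$(printf '\033(B')   # switch back to normal text

# "l" and "m" render as the upper-left and lower-left corners,
# and "q" renders as a horizontal line segment
printf '%slqq%s first prompt line\n'  "$ESC_IN" "$ESC_OUT"
printf '%smqq%s second prompt line\n' "$ESC_IN" "$ESC_OUT"
```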

The main problem was getting a horizontal line in place of the long string of dashes, which went on indefinitely. Those were replaced by lowercase “q” letters without the escapes, which were better placed in a statement nearby:

if [ "$fillsize" -gt "0" ]
    #   It's theoretically possible someone could need more 
    #   dashes than above, but very unlikely!  HOWTO users, 
    #   the above should be ONE LINE, it may not cut and
    #   paste properly

where $ESC_IN and $ESC_OUT were used in the statement just below those comments. You can’t put them inside the first $fill assignment, because the second assignment cuts off the end of the string, including the $ESC_OUT, should you attempt to do it that way.

Manjaro is still the best for my laptop

A popular distro is Mint. It’s a great distro; I have it installed on my home theatre. It’s great because all the apps are up to date (without being bleeding-edge and unstable), and I can chuck as many applications as I like on it, given all the storage space my home theatre has. I have 55 days’ worth of music cued up on Brasero, but since I listen to classical, blues and jazz the most, that comes down to 3 days’ worth of music frequently listened to. I have any public-domain documentaries and movies I can find. I snagged the original Gilligan’s Island movie from one such archive, which also has what many believe to be the worst movies of all time, such as Plan 9 From Outer Space (isn’t there an operating system called “Plan 9”?), a low-budget McCarthy-era sci-fi movie with bad acting and bad writing. The last time I watched Plan 9, I think I lasted 30 minutes before switching it off and moving on to other things. That’s a record for me. Don’t have it cued up to play on your first date with someone. And maybe not your second. I’ve been married 20 years, and my wife doesn’t know I have a copy of Plan 9 on video.

I was impressed with Mint, and kept it on my home theatre. It was Mint 14, and so I thought I would give it a second chance on my laptop. Same problems as before: a jumpy mouse which invokes click events when it randomly jumps somewhere, and Wi-Fi that doesn’t work. Generally pretty bad.

I could have gone back to getting out my DVD of Manjaro 0.8.0, but instead I decided to download 0.8.2, the latest stable version. The desktop is improved incredibly, and it has an app I don’t recognise which resides on the desktop. The mouse is stable, recognizes pen and touch, and my Wi-Fi works right down to the toggle switch on the front of my case. 0.8.3 promises to be even better, but since that’s beta right now, I’ll wait a while.

Manjaro is not for everyone, but I feel it is meant especially for people like me who don’t have much HD space (10GB is reserved for Linux and 2GB for swap, in my case), and sometimes need to have a second OS for their own reasons. Or maybe no reason.

Manjaro gets kudos for being the ideal small linux for my needs

After all is said and done, I have Manjaro running on my TX2. But instead of running it from a USB stick, I’m running it from my hard drive, an SSD in this case. Manjaro is an offshoot of Arch Linux, but with intentions to be more user-friendly. Manjaro is new, having only reached version 0.8.0 by the end of August.

Wired and wireless networking, and pointing devices of all descriptions (pen, touchpad, touchscreen) all work nicely, and the whole thing fits inside the 10GB partition I prepared for it. I gave it 2GB of swap. When installing to such a small system, I didn’t waste time making additional partitions for /usr, /tmp, and /home like I always do. Instead, I just dumped the whole OS under /.

I have posted some nerdy and not-so-nerdy questions on their forums, and have been happy with the answers. For a person who comes from a traditional UNIX background (Solaris/SunOS, IBM, BSD, etc.; Linux came later for me, but the main distros still have a filesystem that follows the FSSTND guidelines), there are some profoundly non-standard liberties that the Manjaro team took with the operating system’s design, chief among them funnelling a good deal of /etc into /root. I am not sure of the benefit (I am guessing security would be the reason), but it does make it difficult for me, a UNIX nerd, to apply my knowledge of a typical UNIX filesystem. It appears as though most of the hundreds of files stored under /root are config files and password files.

Manjaro: Another one to add to my ratings scale

Manjaro is a new, user-friendly offshoot of Arch Linux, and promises to be simpler to install. I saw a video on exactly what was “easy” about it, and was sold. I burned myself an Xfce ISO image on a DVD, and the fact that the touchscreen, stylus, and touchpad on my HP TX2 all worked made me almost wet myself. All right, I said, let’s commit this to USB, and we’ll worry about what doesn’t work later (which at this point was the camera, sound, and network printer detection: comparatively easy stuff, most often software-related).

That, along with its speed and ease of use (it is noticeably easier to use, as advertised, but don’t expect ease on the level of Mint or Ubuntu), added to its score.

So far, 13 points on the ratings scale. The scale is out of 18, so if only I could get the rest of the stuff to work, it could have the remaining 5 points. But alas, I cannot give them. The USB stick failed to reboot properly, and I am considering trying again, or trying another image from the distro.

Mint and Puppy still tie for first, in all of their half-configured, clunky glory!


Tiny Linux Distros on an HP TX2 TouchSmart Laptop

As you know, I have been looking for an ideal distro for installation on to a USB stick. The biggest hurdle for the distros is to recognise my devices, which include a finger/pen touch screen, a mouse touchpad, stereo speakers, a webcam, stereo microphones, and a fingerprint scanner. Knowing the buttons used to flip the screen, mute and adjust volume wouldn’t hurt either. And oh yeah, Wi-Fi.

Yup, my laptop is pretty tricked out. And I don’t want to spend forever researching and finding out what proprietary drivers are needed for what devices, what to configure, and so on. There are just too many. I also can’t use my hard drive for the installation, since the SSD is too small. Thus, I am left with settling for installing on to a USB stick, and the OS must auto-detect and auto-install as many device drivers as possible before I have to actually dig in and configure things by hand.

I began with six candidates, but ended up with 11 candidates, since so many of them were, as I feared, feature-poor. I assigned a scoring system, and believe I have a reliable way (at least for me) of comparing how well the distros in question interact with my beast of a laptop.

Each distro is scored out of 18, with comments:

ArchBang Linux (4): The philosophy of this distro is premised on the idea that “I know what I’m doing”. That would mean that I know exactly what make/model all my devices are, and pretty much know exactly what modules to load and what to configure. If I were that keen on my computer, I would have installed Slackware instead of a relative unknown. That being said, I liked the desktop and its speed. All the points were awarded for speed, out of 5.

Puppy Linux, SlackPup (14): It found my Wi-Fi, but couldn’t configure it. It detected my camera, offering me GUVCView. I had sound, it detected my disks and had icons for them on the desktop, and it detected my printer once my CAT5 was set up. It just didn’t detect my touchscreen and stylus, though I still had my touchpad. For the most part I must say: nice puppy, nice puppy. Based on Slackware.

Puppy Linux, LucidPup (14): This one is tied, but I found this Ubuntu-based distro a little easier, and the desktop to be similar but with different icons. Both versions of Puppy were quite fast.

Lubuntu (8): Lubuntu fell short in a lot of areas: it couldn’t detect touch or pen, had no sound, a modest offering of office software, and was middling in speed.

CrunchBang Linux (0): All I got was a desktop, no mouse. I could still use my keyboard to access programs, and that was about it.

Mint (14): Mint loaded and detected EVERYTHING, but at the huge cost of a clunky desktop that imposed a big speed penalty. The mouse was not particularly well-behaved either. When I say “everything”, I mean everything I was looking at as indicators: wireless, touch stylus, camera, sound, ’net printer detection, speed, ease of use. What I wasn’t looking at might also be important to many: the screen does not flip on rotation, and screen orientation is not bound to the intended keys. But none of the Linux distros I tested or used in the past could do that.

TinyCore Linux (6): TinyCore (X/Wifi and Classic FLWM) detected very little, and had an interface similar to ArchBang Linux. OK for speed, but very little detected.

Damn Small Linux (0): Couldn’t even get X to work.

Vector Linux (0): May be a good, robust distro, but not on my computer. The Slackware-style character interface for configuring the video failed, as I could not use any keys on my keyboard (or my mouse) to navigate the menus, so after hours of effort it was a no-go. My rating scale has no negative numbers, otherwise I would have factored in the fact that there was no live version offered, and I was forced to install to USB before trying it out.

Mint-Xfce (14): It also auto-detected everything like a pro, except the Wi-Fi toggle switch (so I can’t turn my Wi-Fi on; only Windows 7 has been able to do that). The same mousing problems plague this distro as the other Mint version, although there is a speed improvement.

All that said, despite the fact that Mint scored so high, that Puppy Linux is a strong contender, and that I am most seduced by Mint despite its slowness and erratic mouse touchpad (the pen is better behaved), it looks as though there is no perfect distro available, and all of them will take some degree of work.