Compiling the Linux kernel docs

In the last article, I said that compiling and installing source versions of software was akin to “going rogue”. I must confess that I have compiled from source and installed software that wasn’t in my distribution. The most recent was TexStudio, one of the larger projects I have built, requiring tons of other libraries and whatnot to be installed (or, quite often, compiled from source on the side), since it wasn’t a part of the Linux distro I was using at the time. It also wasn’t a part of Cygwin, so I compiled it for that too. It was a great way to kill an afternoon.

But there was a time when I compiled the kernel from source. It was necessary for me, as speed was an issue and I had slow hardware at the time. What I also had was a mixture of hardware pulled from different computers at different times. I researched the specs on sound cards, network cards, video cards and motherboard chipsets, and knew what to tweak in the kernel compilation dialogs so I could get the kernel to do the right thing: be fast and recognize all my hardware. I was doing this before the days of modules, with the version 1.x kernels. It worked, and it was noticeably faster than the stock kernels. X-Windows on my 80486 PC ran quite well with these compiled kernels, but was sluggish to the point of being unusable with a stock kernel. Every few versions of the kernel, I would re-compile a new kernel for my PC, and pretty soon the tcl/tk configuration dialogs made things easy enough that I could answer all the questions from memory.

But then that all ended with version 2. Yes, I compiled a version 2 kernel from source, and yes, it ran OK. But it also had modules. The precompiled kernels were now stripped down and lean, and modules would be loaded only as needed, when the kernel detected the presence of the appropriate hardware. After compiling a few times, I no longer saw the point from a performance standpoint, and today, well into kernel version 5.3, I haven’t compiled my own kernel for a very long time.

For the heck of it, I downloaded the 5.3 kernel, which uncompressed into nearly 1 gigabyte of source code. I studied the config options and the Makefile targets, and saw that I could run “make” to build only the documentation. So that’s what I did.
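For reference, the documentation build in a 5.x kernel tree goes roughly like this. This is only a sketch, and it assumes Sphinx and a LaTeX toolchain are already installed; exact targets and output paths can vary between kernel versions:

# fetch and unpack the kernel source (5.3 shown, matching what I used)
wget https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.3.tar.xz
tar -xf linux-5.3.tar.xz
cd linux-5.3

# build only the documentation, not the kernel itself
make htmldocs        # Sphinx HTML output under Documentation/output
make pdfdocs         # Sphinx, then LaTeX, to produce the PDFs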

It created over 8,500 pages of documentation across dozens of PDF files. And 24 of them are zero-length PDFs, which presumably didn’t compile properly; otherwise the page count would have easily tipped the scales at 10,000. The pages were generated quickly: the 8,500-plus pages, errors and all, took about 3 minutes. The errors seemed to manifest as the associated PDFs not showing up under the Documentation directory. I have a fast-ish processor, an Intel Core i7-4770K (a 4th-generation i7), which I never overclocked, running on what is now a fast-ish gaming motherboard (an ASUS Maximus VI Hero) with 32 gigs of fast-ish RAM. The compilation, even though it was only documentation, seemed to go screamingly fast on this computer, much faster than I was accustomed to (although I guess if I am using 80486s and early Pentiums as a comparison …). The output of the LaTeX compilation to standard error was a veritable blur of underfull hbox warnings and page numbers.
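A quick way to tally the failures is to count the empty PDFs; the path below is simply where they landed in my tree, so adjust to taste:

# count the zero-length PDFs left over from failed LaTeX runs
find Documentation/output -name '*.pdf' -empty | wc -l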

For the record, the page count was generated using the following script:

#!/bin/bash
# Tally the page counts of all non-empty PDFs in the current directory.
tot=0
for i in *.pdf ; do
        # if the PDF is of non-zero length then ...
        if [ -s "${i}" ] ; then
                # pull the page count from pdfinfo's "Pages:" line
                j=$(pdfinfo "${i}" | awk '/^Pages:/ {print $2}')
                # give a pagecount / filename / running total
                echo "${j}    ${i}    ${tot}"
                # tally up the total so far
                tot=$((tot + j))
        fi
done

echo "Total page count: ${tot}"
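Saved as, say, pagecount.sh (the filename is my own invention), it gets run from whichever directory ended up holding the PDFs:

# run from the directory containing the generated PDFs (path assumed)
cd linux-5.3/Documentation/output/latex
bash /path/to/pagecount.sh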

The next step for Linux development

As you might know, there are nearly 300 Linux distributions (currently 289, which is low in historical terms), and this is a testament to how successful the Linux kernel has become on the PC, as well as on other devices, especially in relation to previously existing *NIX systems, which have either fallen by the wayside or barely exist in comparison. The *NIX system that might be a distant second is BSD UNIX.

Just earlier today, I observed that for the installation of TexStudio, for instance, there are two installation images for MS-Windows (all versions from Windows 7 on up), the only distinction being between 32-bit and 64-bit. On the other hand, there was a plethora of Linux images, all depending on which distro of Linux you used. My distro is Ubuntu Studio, and I use GNOME as the desktop environment. The only Ubuntu-based images were for Xubuntu (which uses Xfce).

Apparently, it is also necessary to compile a separate image each time a Linux distro is upgraded. The 19 images I counted for Xubuntu were for versions 14 through 19. Now, I understand that separate images need to be compiled for different processors, but most of these are for PCs running 32-bit or 64-bit processors. The same was true for each upgrade of Debian, or Fedora, or openSUSE. And even then, they needed separate binaries from each other. There are easily more than 50 Linux-based installation images to choose from at the moment.

The “package” system that is now near-universal in the Linux environment provides a convenient way for sysops to assure themselves that installations can happen without breaking the system. Before that, one compiled most new software from source, tweaking system variables and modifying config files to conform to whatever was on the system. This has since become automated with the “./configure” script or “make config” target that most source code ships with these days. In other words, modernization of Linux seems to mean increasing levels of abstraction, and increasing levels of trust that the judgement of a configure script trumps human judgement about what needs configuring for the source to compile. On a larger scale, trusting a package manager over our own common sense works “most of the time”, so when an installation fails due to a package conflict there is a temptation to be lazy and just find something else to install instead of our first choice. Installing software by compiling from source, once seen as a rite of passage for any sensible Linux geek, is now seen as “going rogue”, since it subverts the package manager and, in a sense, takes the law into your own hands.
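For anyone who never had to do it, the old from-source ritual went roughly like this (the package name and install prefix here are purely for illustration):

# the classic three-step build for a hypothetical source tarball
tar -xzf somepackage-1.2.tar.gz
cd somepackage-1.2
./configure --prefix=/usr/local    # probe the system and generate the Makefiles
make                               # compile
sudo make install                  # copy the results under /usr/local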

Of course, Linux distributions still exist for the latter kind of Linux user. The foremost, in my opinion, is Slackware (if you screw up, at least something will run), and a close second is Arch Linux. It is my understanding that Arch Linux requires much more knowledge of your own hardware just to get the system to boot, whereas Slackware will likely at least boot even if your knowledge of the hardware is not quite so keen (but still keen). My experience with Slackware is in the distant past, so I am not sure what the norm is these days, although I understand they still use tarballs, which I remember allowed me to poke around inside a package by un-compressing it into a directory tree outside the intended install location, to see what was inside before committing to installing it. The tarballs are nowadays compressed with “xz” compression, giving the files a “.txz” extension.
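That trick still works with the newer .txz packages, since they are ordinary tar archives under xz compression. Something like the following (the package name is made up) lets you peek inside without touching the live system:

# unpack a Slackware package into a scratch directory for inspection
mkdir -p /tmp/pkg-preview
tar -xvf somepackage-1.0-x86_64-1.txz -C /tmp/pkg-preview
ls -R /tmp/pkg-preview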

But I digress. Getting back to installation images, it should not be too difficult for the people who manage these Linux distros to make it less necessary to have so many different images for the same distribution. In the MS-Windows example, only one version of TexStudio was needed across three or four different Windows versions. I am running Windows 7 with software that didn’t exist in the days of Windows 7, alongside other software that originated on Windows 2000. All of it still runs, and runs quite well. Fixing this problem is hopefully doable in the near future.

The greatest advance in computer technology

HP TX-2: Bought in 2007, it sported 4 USB ports, a DVD drive, two expansion ports, a VGA port, and an IR remote control.

The three HP laptops I have serve as latter-day museum pieces of how technology has progressed. I am not trying to slag Hewlett-Packard. I like their printers, and despite their reputation, I also like their laptops. Today, I am mentioning them as a microcosm of how technology has progressed; what can be said about HP can be said across the industry, and HP is nothing special in this regard. These are all full laptops with attached keyboards. They all have rotating displays with a webcam, onboard stereo mikes, stereo speakers, a touchscreen and a touchpad. It is also fair to say that all of these laptops were purchased used (saving nearly a thousand dollars apiece off the prices when new), but all have been fully functional from the first day, and are still functional.

As you read the captions on each successive illustration from top to bottom, what I don’t mention is that, of course, the video hardware is more advanced; and the last laptop, the Elitebook, is, in my experience, the first to offer an internal SSD out of the box. The Elitebook also has nowhere near the heat problems suffered by my TX-2.

HP TM-2: Bought in 2010, it removed one USB port and one of the expansion ports. It also removed the DVD-ROM drive and no longer has an IR remote. The VGA port was replaced with an HDMI port.

But these advances are small compared to the greatest advance this progression of laptops shows: the elimination of major features, and the marketing effort on the part of computer companies to convince us that this is a “good thing”. By the time we get to the Elitebook, we no longer have a DVD drive, and we have eliminated half of our USB ports. Neither of the two USB ports that remain is USB 3, either. Not mentioned in the captions is the elimination of the spare headphone jack and the microphone jack. The combination mike/headphone jack on the Elitebook won’t accept an actual microphone, supporting instead, perhaps, mikes built into a headset. My own headset uses a USB connection, so it doesn’t need an eighth-inch jack; but external microphone support is otherwise so poor that the built-in mikes are your only good option.

HP EliteBook Revolve 810: Bought in 2015, it lacks all of the features the TM-2 lacked, and in addition drops another USB port and the touch pen. It has neither VGA nor HDMI, but it does have something touted as a “dual display port”, which fits nothing on any equipment I have, but can be converted to HDMI with the right adapter cable. It is whisper-quiet, partly because the speakers are not that great.

One thing (out of many) that motivated me not to get rid of my two older laptops is the very reason anyone would buy a convertible tablet in the first place: apart from using the screen for direct Windows navigation, you can also write documents in your own handwriting, or make freehand drawings on the tablet screen. I make use of this feature, and found to my horror that the Elitebook has really terrible support for freehand writing and drawing. The other two actually have pretty good support, and it was a great disappointment to see this feature lacking in the Elitebook despite its faster CPU and graphics processor. Apart from it not shipping with a stylus, the craggy way it renders straight lines when you do use a stylus (even if you use a ruler) has been well documented in many other blogs and video reviews.

But compared with even the HP EliteBook, Apple and Google have gone further off the deep end with the elimination of features, with consumers willing to pay more for equipment that can do less. It is a marketer’s wet dream, made manifest in reality. Who needs a keyboard at all, or any external connectors? Use Bluetooth for all your peripherals (nowadays, the keyboard is a peripheral), and “the cloud” as your external hard drive. And still, these pieces of crippled hardware are so popular, they almost sell themselves. Having only Bluetooth restricts flexibility, since a peripheral that doesn’t use Bluetooth, such as a USB drive, is no longer an option for owners of these devices. For storage, I would only have “the cloud”, and I would have to hope for free internet access everywhere I go in order to reach my data. It is quite possible that users who rely on cloud storage are paying monthly for their internet connection, and paying monthly again for “cloud” storage. Of course Apple, Google and Microsoft are happy to provide cloud services so you can store as much data as possible, and to autosave your documents in the cloud to maximize your use of their services.

What is “universal” about USB?

I am one of those hopeless romantics who believe that words must mean something. “Universal” is quite a strong word, and its all-encompassing reach implies that it is good for … well, everything. As in the whole universe which contains that thing.

USB.org lists at least 18 connectors according to “device class”, few of which you would consider interchangeable with another. I have seen, for example, radically differently shaped portable hard drive connectors over the years (at least 3 kinds) that all say they are USB, all illustrated in the photo montage provided here. They would never be considered interchangeable.

Perhaps by “universal”, USB.org (homepage of the USB Implementers Forum) just means that this is another attempt to apply industry standards to an understandably chaotic computer industry. “Universal” invites mental images of “one connector fits all”, and we can see that this can’t be the case; it is pretty much impossible given the data needs of different devices. It appears instead to be an attempt to eliminate or reduce proprietary connectors, which are often made by one manufacturer for one device, never to be seen again in the next model year, from any manufacturer. It is a way for a consortium of manufacturers to agree: “OK, if we want to advertise USB on our products, we have to pick from this or that set of connectors to sport the USB logo on our package.”

I notice that among the many predictable companies represented in the consortium (Intel, HP, and a plethora of small corporations and manufacturers numbering in the thousands), Apple is also on the board of directors. Apple, the current reigning king of consumer lock-in, has allowed their proprietary connectors to be made by anyone. I bought one at a gas station, and it works surprisingly well. It consists of a main USB cord ending in a micro-USB connector, over which I can fit an (Apple) Lightning connector attachment and have it both ways. I can charge and transfer data to and from my iPad with it.

Again, it is romantic old me talking here, but if I lose or damage a USB cable, I should be able to find a replacement at any electronics shop. In reality, though, I don’t expect stores to sell all 18 or so different kinds of cable. But I also should not be forced to send off to the manufacturer of my device for one, often at exorbitant cost, which is what I think the consortium was trying to avoid.