Compiling the Linux kernel docs

In the last article, I said that compiling and installing source versions of software was akin to “going rogue”. I must confess that I have compiled from source and installed software that wasn’t in my distribution, most recently TexStudio, one of the larger projects I have attempted, requiring tons of other libraries and whatnot to also be installed (or, quite often, compiled from source on the side), since it wasn’t part of the Linux distro I was using at the time. It also wasn’t part of Cygwin, so I compiled it for that too. It was a great way to kill an afternoon.

But there was a time when I compiled the kernel from source. It was necessary for me, since speed was an issue and I had slow hardware at the time. What I also had was a mixture of hardware pulled from different computers at different times. I researched the specs on sound cards, network cards, video cards and motherboard chipsets, and knew which options to tweak in the kernel compilation dialogs, so I could get the kernel to do the right thing: be fast and recognize all of my hardware. I was doing this before the days of modules, with the version 1.x kernels. It worked, and it was noticeably faster than the stock kernels. X Windows on my 80486 PC ran quite well with these compiled kernels, but was sluggish to the point of being unusable with a stock kernel. Every few versions of the kernel, I would re-compile a new kernel for my PC, and pretty soon the tcl/tk dialogs they added made things pretty easy, and I could answer all the questions from memory.

But then that all ended with version 2. Yes, I compiled a version 2 kernel from source, and yes, it ran OK. But it also had modules. The precompiled kernels were now stripped down and lean, and modules would be loaded only as needed, when the kernel auto-detected the presence of the appropriate hardware. After compiling a few times, I no longer saw the point from a performance standpoint. Today we are well into kernel version 5.3, and I haven’t compiled my own kernel for a very long time.

For the heck of it, I downloaded the 5.3 kernel, which uncompressed into nearly 1 gigabyte of source code. I studied the config options and the Makefile options, and saw that I could just run “make” to create only the documentation. So that’s what I did.
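
For reference, the documentation-only targets of the kernel’s top-level Makefile look something like this (a sketch from memory; it assumes the Sphinx and TeX toolchains are installed, and the output paths may vary between kernel versions):

# run from the top of the kernel source tree
make htmldocs     # build the HTML documentation
make pdfdocs      # build the LaTeX/PDF documentation (under Documentation/output/)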

It created over 8,500 pages of documentation across dozens of PDF files. And 24 of them are zero-length PDFs, which presumably didn’t compile properly; otherwise the page count would have easily tipped the scales at 10,000. The pages were generated quickly: the 8,500 or more pages, errors and all, took about 3 minutes, with the errors manifesting as the associated PDFs never showing up under the Documentation directory. I have a fast-ish processor, an Intel 4770K (a 4th-generation i7), which I never overclocked, running on what is now a fast-ish gaming motherboard (an ASUS Maximus VI Hero) with 32 gigs of fast-ish RAM. The compilation, even though it was only documentation, seemed to go screamingly fast on this computer, much faster than I was accustomed to (although I guess if I am using 80486s and early Pentiums as a comparison …). The output of the LaTeX compilation to standard error was a veritable blur of underfull \hbox warnings and page numbers.

For the record, the pagecount was generated using the following code:

#! /bin/bash
# tally the page counts of all non-empty PDFs in the current directory
tot=0
for i in *.pdf ; do
        # if the PDF is of non-zero length then ...
        if [ -s "${i}" ] ; then
                # pull the page count out of pdfinfo's output
                j=$(pdfinfo "${i}" | awk '/^Pages:/ {print $2}')
                # give a pagecount/filename/running total
                echo "${j}    ${i}    ${tot}"
                # tally up the total so far
                tot=$((tot + j))
        fi
done

echo "Total page count: ${tot}"

The next step for Linux development

As you might know, there are nearly 300 Linux distributions (currently 289, which is low by historical standards), and this is a testament to how successful the Linux kernel has become on the PC, as well as on other devices, especially in relation to previously-existing *NIX systems, which have either fallen by the wayside or barely exist by comparison. The only *NIX system that might claim a distant second place is BSD UNIX.

Just earlier today, I observed that for the installation of TexStudio, for instance, there are two installation images for MS-Windows (all versions from Windows 7 on up), the only distinction being between 32-bit and 64-bit. On the other hand, there was a plethora of Linux images, all depending on which distro of Linux you use. My distro is Ubuntu Studio, and I use Gnome as the window manager. The only Ubuntu-based Linux images were for Xubuntu (which uses xfce as its window manager).

It also seems to be necessary to compile a separate image each time a Linux distro is upgraded. The 19 images I counted for Xubuntu covered versions 14 through 19. Now, I understand that separate images need to be compiled for different processors, but most of these are for PCs running 32 or 64-bit processors. The same was true for each upgrade of Debian, Fedora, or OpenSuse, and even then, each needed binaries separate from the others. There are easily more than 50 Linux-based installation images you can choose from at the moment.

The “package” system that is now near-universal in the Linux environment provides a convenient way for sysops to assure themselves that installations can happen without causing problems on the system. Before that, one compiled most new software from source, tweaking system variables and modifying config files to conform to whatever was on your system. This has since become automated with the “./configure” scripts or “make config” targets that most source trees ship these days. In other words, modernization of Linux seems to mean increasing levels of abstraction, and increasing levels of trust that the judgement of a “make config” trumps human judgement of what needs configuring for the source to compile. On a larger scale, trusting a package manager over our own common sense can be seen as working “most of the time”, so there is a temptation to be lazy: if an installation fails due to a package conflict, just find something else to install besides whatever our first choice was. Installing software by compiling from source, once seen as a rite of passage for any sensible Linux geek, is now seen as “going rogue”, since it subverts the package manager and, in a sense, takes the law into your own hands.
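
For what it’s worth, the from-source ritual that the package managers now automate generally boils down to something like this (a rough sketch; the exact options vary from project to project):

# read the available options, then configure, build and install
./configure --help
./configure --prefix=/usr/local
make
sudo make install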

Of course, Linux installations still exist for the latter kind of Linux user. The foremost, in my opinion, is Slackware (if you screw up, at least something will run), and a close second is Arch Linux. It is my understanding that Arch Linux requires much more knowledge of your own hardware in order to even boot the system, whereas Slackware will likely at least boot if your knowledge of the hardware is not quite so keen (but still keen). My experience with Slackware is in the distant past, so I am not sure what the norm is these days, although I understand they still use tarballs, which I remember allowed me to play with a package by un-compressing it into a directory tree not intended for the installation, to see what was inside before committing to installing it. The tarballs are compressed nowadays with “xz” compression, giving the files a “.txz” extension.
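
That kind of poking around still works with the modern .txz packages; something along these lines (just a sketch, with the package file name made up for illustration):

# list the contents of a Slackware package without installing it
tar -tvJf foo-1.0-x86_64-1.txz
# or unpack it into a scratch directory, far away from /
mkdir -p /tmp/peek
tar -xJf foo-1.0-x86_64-1.txz -C /tmp/peek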

But I digress. Getting back to installation images, it should not be too difficult for the people who manage these Linux distros to make it less necessary to have so many different images for the same distribution. In the MS-Windows example, only one version of TexStudio was needed across three or four different Windows versions. I am running Windows 7 with software that didn’t exist in the days when Windows 7 was new, alongside other software that originated on Windows 2000. All of it still runs, and runs quite well. Fixing this problem is hopefully do-able in the near future.

The greatest advance in computer technology

HP TX-2: Bought in 2007, it sported 4 USB ports, a DVD drive, two expansion ports, a VGA port, and an IR remote control.

The three HP laptops I have serve as latter-day museum pieces of how technology has progressed. I am not trying to slag Hewlett-Packard; I like their printers, and despite their reputation, I also like their laptops. Today, I am mentioning them as a microcosm of how technology has progressed across the industry: what can be said about HP can be said about everyone else, so HP is nothing special in this regard. These are all full laptops with attached keyboards. They all have rotating displays with a webcam, onboard stereo mikes, stereo speakers, a touchscreen and a mousepad. It is also fair to say that all of these laptops were purchased used (saving nearly a thousand dollars apiece off the prices when new), but all have been fully functional from the first day, and are still functional.

As you read the captions on each successive illustration going from top to bottom, what I don’t mention is that, of course, video is more advanced; and the last laptop, the Elitebook is, in my experience, the first to offer an internal SSD out of the box. The Elitebook also has nowhere near the heat problems suffered by my TX2.

HP TM-2: Bought in 2010, it removed one USB port and one of the expansion ports. It also removed the DVD ROM and no longer has an IR remote. The VGA port is gone too, replaced with an HDMI port.

But these advances are small compared to the greatest advance this progression of laptops shows: the elimination of major features, and the marketing effort on the part of computer companies to convince us that this is a “good thing”. By the time we get to the Elitebook, we no longer have a DVD drive, and half of the USB ports have been eliminated. Neither of the two USB ports that remain is USB 3, either. Not mentioned in the captions is the elimination of the spare headphone jack and the microphone jack. The combination mike/headphone jack on the Elitebook won’t support actual microphones, supporting instead, perhaps, mikes built into a headset. My headset uses a USB connection, so it doesn’t need an eighth-inch jack, but microphone support is poor enough overall that the built-in mikes are your only good option.

HP EliteBook Revolve 810: Bought in 2015, it lacks all of the features the TM-2 lacked, has one less USB port still, and has no touch pen. It also has neither VGA nor HDMI, but it does have something touted as a “dual display port”, which fits nothing on any equipment I have, but can be converted to HDMI with the right adapter cable. It is whisper-quiet, partly because the speakers are not that great.

One thing (out of many) that motivated me not to get rid of my two older laptops is the very reason anyone would buy a convertible tablet in the first place: apart from using the screen for direct Windows navigation, you can also write documents in your own handwriting, or make freehand drawings on the tablet screen. I do make use of this feature, and found to my horror that the Elitebook has really terrible support for freehand writing and drawing. The other two actually have pretty good support, and it was a great disappointment to see this feature lacking in the Elitebook despite its faster CPU and graphics processor. Apart from the missing stylus, the craggy way it renders straight lines when you do use a stylus (even if you use a ruler) has been well documented in many other blogs and video reviews.

But even compared with the HP EliteBook, Apple and Google have gone further over the deep end with the elimination of features, with consumers willing to pay more for equipment that can do less. It is a marketer’s wet dream, made manifest in reality. Who needs a keyboard at all, or any external connectors? Use Bluetooth for all your peripherals (nowadays, the keyboard is a peripheral), and “the cloud” as your external hard drive. And still, these pieces of crippled hardware are so popular, they almost sell themselves. Having only Bluetooth restricts flexibility, since a peripheral that doesn’t use Bluetooth, such as a USB drive, is no longer an option for owners of these devices. For storage, I would have only “the cloud”, and I would have to hope for free internet access everywhere I go in order to reach my data. It is quite possible that users who rely on cloud storage are paying monthly for their internet connection, and paying monthly again for “cloud” storage. Of course Apple, Google and Microsoft are happy to provide cloud services so you can store as much data as possible, and to autosave your documents in the cloud to maximize your use of their services.

What is “universal” about USB?

I am one of those hopeless romantics who believe that words must mean something. “Universal” is quite a strong word, and its all-encompassing reach implies that it is good for … well, everything. As in the whole universe which contains that thing.

USB.org lists at least 18 connectors according to “device class”, few of which you would consider interchangeable with one another. I have seen, for example, radically differently-shaped portable hard drive connectors over the years (at least 3 kinds) that all say they are USB, all illustrated in the photo montage provided here. They would never be considered interchangeable.

Perhaps by “universal”, USB.org (homepage of the USB Implementers Forum) just means that this is another attempt to apply industry standards to an understandably chaotic computer industry. “Universal” invites mental images of “one connector fits all”, and we can see that can’t be the case; it is pretty much impossible given the data needs of different devices. It appears, rather, to be an attempt to eliminate or reduce proprietary connectors, which are often made by one manufacturer for one device and never seen again in the next model year, by any manufacturer. It is a way for a consortium of manufacturers to agree: “OK, if we want to advertise USB on our products, we have to pick from this or that set of connectors to sport the USB logo on our package.”

I notice that among the many predictable companies represented in the consortium (Intel, HP, and a plethora of small corporations and manufacturers numbering in the thousands), Apple is also on the board of directors. Apple, the current reigning king of consumer lock-in, has allowed their proprietary connectors to be made by anyone. I bought one at a gas station, and it works surprisingly well. It consists of a USB main cord ending in a micro USB connector, over which I can fit an (Apple) Lightning connector attachment and have it both ways. I can charge and transfer data to and from my iPad with it.

Again, this is romantic old me talking, but if I lose or damage a USB cable, I should be able to find a replacement at any electronics shop. In reality, though, I don’t expect stores to sell all 18 or so different kinds of cable. But I also should not be forced to send off to the manufacturer of my device for one, often at exorbitant cost, which is what I think the consortium was trying to avoid.

A brief note on Pythagorean Triples

And I decided today to share what I learned about an algorithm for generating Pythagorean triples for any m and n, where m, n \in \mathbb{Z}. A Pythagorean triple is any three whole numbers which satisfy the equation a^2 + b^2 = c^2. Let a = m^2 - n^2 and b = 2mn (so that c = m^2 + n^2), and you will obtain a solution to the relation a^2 + b^2 = c^2. It is therefore not that hard, if we allow m and n to be any numbers from 1 to 100 with m \ne n, to write a computer program to generate the first 9800 or so Pythagorean triples, allowing for negative values of a or b.
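
For illustration, here is a minimal sketch of such a program in bash (any language would do; this simply applies the m, n recipe above):

#! /bin/bash
# generate a = m^2 - n^2, b = 2mn, c = m^2 + n^2
# for all m, n from 1 to 100 with m != n; a goes negative whenever n > m
for m in $(seq 1 100) ; do
        for n in $(seq 1 100) ; do
                if [ "${m}" -ne "${n}" ] ; then
                        a=$((m*m - n*n))
                        b=$((2*m*n))
                        c=$((m*m + n*n))
                        echo "${a} ${b} ${c}"
                fi
        done
done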

Facebook bots apparently make their own language

From 2001: A Space Odyssey (1968)

Like a scene from Stanley Kubrick’s 2001: A Space Odyssey, computers are now seemingly taking matters into their own hands and possibly overthrowing their human overlords.

Many news outlets are telling us that Facebook bots can talk to each other in a language they are making up on their own. Some news outlets appear convinced that this communication is real. Even fairly respectable news outlets such as Al Jazeera are suggesting the proverbial sky is falling. However, they fall short of speculating that the Facebook bots are plotting against us.

While Facebook pulled the plug on the encoded “conversation” (which on inspection was repetitive gibberish along with repetitive responses), one half-expected the bots to try and prevent the operators from turning them off somehow. Maybe by disabling Control+C or something. Maybe they were plotting to prevent the human operator from pulling the plug from the wall.

What Facebook was experimenting with was something called an “End-to-End” negotiator, the source code of which is available to everyone on GitHub. Far from being a secret experiment, it was based on a very public computer program written in Python, whose source anyone could download and play with in a Python interpreter, itself freely available for most operating systems. And to greatly aid the confused programmer, the code was documented in some detail, just to make sure everyone understands what it is, what it does, and how to make it talk to other instances of the same program.

They were discussing something, but no one knows what. There are news stories circulating that they gerrymandered the English words to become more efficient for themselves, but I am going to invoke Occam’s Razor and assume, until convinced otherwise, that this was a bug, the bots were braindead, and the world is safe from plotting AI bots.

For now.

Exploring Thales’ Theorem

I was playing with a geometry software package and decided to explore Thales’ Theorem.

The theorem states that for any diameter line drawn through a circle, with endpoints B and C on the circle (obviously passing through the circle’s center point), any third non-collinear point A on the circle can be used to form a right-angle triangle. That is, no matter where you place A on the circle, the angle BAC is always a right angle. Most places I have read online stop there.

There was one small problem with my software. Since constructing the circle meant that the center point was already defined by the program, there didn’t seem to be a way to make the center point part of the line, except by nudging it into place with the mouse or arrow keys. So, as a result, my angle ended up being slightly off: 90.00550^{\circ} was the best I could do. But then I noticed something else: no matter where point A was moved from then on, the angle stayed exactly the same, at 90.00550^{\circ}.

Now, 90.00550^{\circ} is not a right angle; right angles have to be exactly 90^{\circ} or go home. But if the constancy holds even when the angle is not a right angle, then something like Thales’ theorem should work for any angle.

Why not restate the theorem for internal angles in the circle a little more generally then?

For any chord with endpoints B and C on the circle, and a point A in the major arc of the circle, all angles \angle BAC will equal some angle \theta. For points A in the minor arc, all angles will be equal to 180^{\circ} - \theta.

Note that BC is the desired chord, making the arc containing point A the major arc, and the small arc in the lower part of the circle the minor arc. As shown, all angles in the major arc are about 30.3 degrees.

So, now the limitations of my software are unimportant. In the setup shown on the left, the circle contains the chord BC, and A lies in the major arc, forming an angle \angle BAC = 30.29879^{\circ}. If A lay in the minor arc, the angle would have been 180^{\circ} - 30.29879^{\circ} = 149.70121^{\circ}.

By manipulating BC, you can obtain any angle \angle BAC you like, so long as \angle BAC < 180^{\circ}. More precisely, all angles inscribed in the minor arc in the manner previously described will satisfy 90^{\circ} < \angle BAC < 180^{\circ}, and all angles inscribed in the major arc will satisfy 0^{\circ} < \angle BAC < 90^{\circ}. If the chord is actually a diameter of the circle, then \angle BAC = 90^{\circ} exactly.
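
This restated version is really the inscribed angle theorem in disguise. Writing O for the circle’s center (a label not used in my diagrams), the constant angle comes from the fact that an inscribed angle is half the central angle subtending the same chord:

\angle BAC = \tfrac{1}{2}\,\angle BOC  (for A in the major arc)
\angle BA'C = \tfrac{1}{2}\,(360^{\circ} - \angle BOC) = 180^{\circ} - \angle BAC  (for A' in the minor arc)

When BC is a diameter, \angle BOC = 180^{\circ}, so \angle BAC = 90^{\circ}, which gives Thales’ theorem as a special case.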

Gnome, a tale of a dead fail whale with a happy ending …

I moved my desktop from xfce to gnome today, and spent most of the day so far getting gdm3, the GNOME display manager, to work. For a while, I had two display managers installed, then narrowed it down to gdm3 and uninstalled the other one.

The login manager failed to come up, and for most of this morning I was stuck in a character console. In GNU/Linux, strange things happen when you read a lot of documentation and error messages. I began to see artifacts that are in themselves hilarious, although after hours of poring through debug and error messages, I first thought I just needed a long break. But no: the same phrase can be googled, and others have reported seeing it, thus confirming my strange experience.

The error I saw was

We failed, but the fail whale is dead. Sorry.

So, what on Earth is a “fail whale”? It appears to mean that the part of the server that issues error messages has died. Apparently, gdm3 itself didn’t die, since running ps showed that it was still running, although not running a login screen.

It turns out that the “fail whale” was a meme built around an illustration by Yiying Lu, used by Twitter to report its errors. I guess I missed out on that meme.

Somewhere in the thicket of error and debug messages was a reference to the fact that /usr/share/gnome-session/sessions/ubuntu.session did not exist. I went to that directory as root, and symlinked gnome.session to ubuntu.session:

ln -s gnome.session ubuntu.session

That appeared to be all that was needed. I was able to log on to a gnome desktop.

BoUoW: Bash on Ubuntu on Windows

Tux telling you the most current version of Ubuntu running under BoUoW for Windows.

I am not proud of possibly inventing the ugly acronym “BOUOW”, but “Bash on Ubuntu on Windows” appears to compel it. Maybe we can pronounce it “bow-wow”, though I’m not sure if that’s complimentary. I just did a Google search, and, no, as predicted, I couldn’t have invented it: it is variously acronymized as B.O.U.O.W. or BoUoW. It has been around since at least March of 2016, giving end users, computer geeks, and developers plenty of time to come up with something of a nickname or acronym.

But I actually mean to praise BoUoW, and to give it considerably high praise. This is a brave move on Microsoft’s part, and a long time coming. MS has made *NIX access available in its kernel for some time now, making *NIX conventions possible on the command line, such as certain commands in PowerShell. The user has to enable the capability in the Windows 10 settings (“Windows Subsystem for Linux”, or WSL), set the machine to “Developer mode” as Admin, and follow the instructions on the MSDN website to download the binaries and enable a bash shell from either the command line or PowerShell.

BoUoW takes advantage of the WSL to do impressive things like use the same network stack as Windows 10 itself. This is because with WSL enabled, a UNIX command such as SSH can now make calls directly to the Windows 10 kernel to access the network stack.

This is, by Microsoft’s admission, a work in progress. It would worry me if they had not said that. But lots of things do work: vi works and is symlinked (or somehow aliased) to vim; the bash shell comes with other common aliases like “ll” for “ls -l”; and apparently, as part of the installation, you actually have a miniature version of Ubuntu, complete with images of Ruby, Perl and Python, and if something isn’t installed, you can always use “apt-get” to install it.

One of the security features has the disadvantage of requiring a separate install of BoUoW for each user. If a user types “bash” in a cmd window and BoUoW is not installed for that user, the install happens all over again, and a separate image goes under that user’s AppData directory. If you are using an SSD as your C: drive, as I am, then you might find that limiting due to a shortage of space.

There are many things not recommended yet. If you are a serious web developer, for example, you will find that many of the things you want, such as mySQL, are not currently working the right way. If you are a systems programmer, you’ll find that ps and top only see the processes of the Linux subsystem, so I wouldn’t use BoUoW for any serious process management. That being said, it does contain the old standbys: grep, sed, and awk.

The compiling and output of my “Hello, world!” program, also showing the source code.

gcc had to be installed separately. The binary it created for my “Hello, world!” program lacks the Microsoft .exe extension; as is usual for Unix binaries, it has no extension at all. It is gcc version 4.8.4, while the current version is 6.3. This older gcc usually won’t pose a problem for most users.
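
The compile itself is the usual one-liner (a sketch; “hello.c” is just a stand-in for my source file name):

# compile and run; note that the resulting binary has no .exe extension
gcc hello.c -o hello
./hello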

The current stable Ubuntu is 16.04, but BoUoW uses the previous stable version, 14.04, and thus has slightly older versions of Perl (5.18), Python (2.7.6), bash (4.3.11), Ruby (1.8, available using apt-get), vim (7.4), and other software. Vim, however, appears to be the “large” version, which is expandable using plugins like Vundle, which is good news. I don’t suspect that these slightly older versions will cause anyone problems, except possibly Python, which has since gone all the way up to version 3.5.2. You are also warned that under Python or Perl you might run into problems, since not all of their libraries are guaranteed to run correctly under BoUoW. That is not surprising, since Python has hundreds of installable libraries and Perl has thousands; verifying them all could take a while.


Another crack at 6×6 magic squares

Even-ordered magic squares are not difficult just because they are even, in my opinion; they are difficult to design because their order is composite. My experience has shown that by far the easiest squares to design are those whose order is a prime number like 5, 7, 11, or 13. I have run into similar problems with 9×9 and 15×15 squares as well as with 6×6 and 8×8. The 6×6 seems to have the reputation of being the most difficult to make magic, although I stumbled on one system that produces them, and wrote a few years ago about how I applied that method to a spreadsheet. That method, however, led to only 64 possibilities.

Spreadsheets are a great way of checking your progress as you build such squares, especially when you are trying to build a square using a method you made up on your own, such as applying a Knight’s tour (which works OK with an order-8 square) to an order-6 square. This requires some facility with spreadsheet formulae and other features which improve efficiency. Using your own method is very much a matter of trial and error, and you have to make a rule as to whether you will wrap the Knight’s moves to the opposite side of the board (if you decide to use a Knight’s tour at all), or keep your moves within the board limits and change the direction of the “L”s in your movements (this seems to lead to dead ends, where you find you have no destinations left which follow an “L”, and consequently to squares which are not really magic). At any rate, the best squares follow some kind of rule which you need to stick to once you make it.

The best ones I have been able to make with a Knight’s tour are: 1) when your “L”s are all in the same “direction”; and 2) when you sum two squares. The problem is that all of the ones I have made with these methods so far either end up with weak magic (the rows add up but not the columns) while the numbers 1-36 are all there; or all rows and columns make the magic number of 111, yet not all of the numbers are present, and several are duplicated (and even triplicated).

  1   9  17  24  28  32   111
 26  36   4   7  15  23   111
 17  19  27  32   6  10   111
 34   2  12  17  19  27   111
 21  29  31   4   8  18   111
 12  16  20  27  35   1   111
132 111 111 111 111 111    90

The above table shows the totals for the rows and columns of one attempt I made at a semi-magic square. Rows and columns work out to the correct total, but not the diagonals, whose sums appear in the bottom corners (132 for one diagonal, 90 for the other). There are also duplicate entries, as well as missing entries: 1, 4, 12, 19, and 32 each appear twice, while 17 and 27 each appear three times. The numbers missing are 3, 5, 11, 13, 14, 22, 25, 30, and 33. That being said, the rows and columns add perfectly to 111, just not the diagonals. However, the average of the two diagonals is the magic number 111 (this does not always work out). The sum of the missing numbers is 156, while the sum of the “excess” numbers (the sum of the numbers that occur twice plus double the sum of the numbers occurring thrice) is also 156 (which could be a coincidence).

The above semi-magic square results from the sum of two squares, where a Knight’s tour is performed on the second square, with the numbers 1 to 6 placed in random order going down from top to bottom. If I get too close to the bottom edge of a column, the Knight’s tour wraps back to the top of the square. Beginning on the third column, I shift the next entry one extra square downward. The result is six of each number, with each number appearing exactly once in every row and column.

The first square contains the multiples of 6 from 0 to 30, in random order going from left to right, also laid out in a Knight’s tour, wrapping from right to left and continuing. The third row is shifted by 1 to the right.

 0   6  12  18  24  30
24  30   0   6  12  18
12  18  24  30   0   6
30   0   6  12  18  24
18  24  30   0   6  12
 6  12  18  24  30   0

          +

 1   3   5   6   4   2
 2   6   4   1   3   5
 5   1   3   2   6   4
 4   2   6   5   1   3
 3   5   1   4   2   6
 6   4   2   3   5   1

Note that the first square wasn’t really randomized. When I tried to randomize it, the result was still semi-magic, similar to what was described above, except that in the case I attempted, the average of the diagonals was not 111. The two squares are added, this time using actual matrix addition built into Excel. There is a “name box” at the far left of the application, below the ribbon but above the spreadsheet itself; this is where you can give a cell range a name. I highlighted the first square with my mouse, and in the name box I gave it a unique name like “m1x” (no quotes). The second was similarly selected and called “m2x”. I prefer letter-number-letter names so that the spreadsheet does not confuse them with cell addresses (which it will). Then I selected a 6×6 range of empty cells on the spreadsheet, and in the formula bar (not in a cell) next to the name box, I entered =m1x+m2x and pressed CTRL+ENTER. The range of empty cells I selected was then filled with the sum of the squares m1x and m2x, which is the first semi-magic square shown in this article.
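
The same check can be done outside of a spreadsheet. Below is a small sketch in bash and awk that adds the two squares and prints the row and column totals; it assumes the two matrices above have been saved, six numbers per line, in files named m1x.txt and m2x.txt (names made up for this example):

#! /bin/bash
# element-wise sum of the two 6x6 squares, with row and column totals
paste -d' ' m1x.txt m2x.txt | awk '{
        n = NF / 2
        rowtot = 0
        for (i = 1; i <= n; i++) {
                s = $i + $(i + n)       # add matching cells of the two squares
                printf "%4d", s
                rowtot += s
                coltot[i] += s
        }
        printf "  %5d\n", rowtot
}
END {
        for (i = 1; i <= n; i++) printf "%4d", coltot[i]
        printf "   (column totals)\n"
}'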