# Compiling the linux kernel docs

In the last article, I said that compiling and installing source versions of software was akin to “going rogue”. I must confess that I have compiled from source and installed software that wasn’t in my distribution. Most recently that was TexStudio, one of the larger projects I have attempted, requiring tons of other libraries to be installed as well (or, quite often, compiled from source on the side), since it wasn’t part of the linux distro I was using at the time. It also wasn’t part of Cygwin, so I compiled for that too. It was a great way to kill an afternoon.

But there was a time when I compiled the kernel from source. It was necessary for me, as speed was an issue and I had slow hardware at the time. What I also had was a mixture of hardware pulled from different computers at different times. I researched specs on sound cards, network cards, video cards and motherboard chipsets, and knew which options to tweak in the kernel compilation dialogs, so I could get the kernel to do the right thing: be fast and recognize all my hardware. I was doing this before the days of modules, with the version 1.x kernel. It worked, and it was noticeably faster than the stock kernels. X-Windows on my 80486 PC ran quite well with these compiled kernels, but was sluggish to the point of unusable with a stock kernel running. Every few versions of the kernel, I would re-compile a new kernel for my PC, and pretty soon the tcl/tk dialogs they introduced made things pretty easy, and I could answer all the questions from memory.

But then that all ended with version 2. Yes, I compiled a version 2 kernel from source, and yes, it ran OK. But it also had modules. The precompiled kernels were now stripped down and lean, and the modules would only be added as needed when the kernel auto-detected the presence of the appropriate hardware. After compiling a few times, I no longer saw the point from a performance standpoint, and today we are well into kernel version 5.3, and I haven’t compiled my own kernel for a very long time.

For the heck of it, I downloaded the 5.3 kernel, which uncompressed into nearly 1 gigabyte of source code. I studied the config options and the Makefile options, and saw that I could just run “make” to create only the documentation. So that’s what I did.

It created over 8,500 pages of documentation across dozens of PDF files. And 24 of them are zero-length PDFs, which presumably didn’t compile properly; otherwise the page count would easily have tipped the scales at 10,000. The build was quick: the 8,500-plus pages, errors and all, were generated in about 3 minutes. The errors manifested as the associated PDFs not showing up under the Documentation directory. I have a fast-ish processor, an Intel 4770K (a 4th-generation i7), which I never overclocked, running on what is now a fast-ish gaming motherboard (an ASUS Maximus VI Hero) with 32 gigs of fast-ish RAM. The compilation, even though it was only documentation, seemed to go screamingly fast on this computer, much faster than I was accustomed to (although I guess if I am using 80486’s and early Pentiums as a comparison …). The output of the LaTeX compilation to standard error was a veritable blur of underfull \hbox warnings and page numbers.

For the record, the page count was generated using the following script:

```bash
#! /bin/bash
list=$(ls *.pdf)
tot=0
for i in $list ; do
    # if the PDF is of non-zero length then ...
    if [ -s "${i}" ] ; then
        j=$(pdfinfo "${i}" | grep ^Pages)
        j=$(awk '{gsub("Pages:", ""); print}' <<< ${j})
        # give a pagecount/filename/running total
        echo ${j} ${i} ${tot}
        # tally up the total so far
        tot=$(($tot + $j))
    fi
done

echo Total page count: ${tot}
```
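For comparison, here is a hypothetical Python version of the same tally. It assumes, as the shell script does, that the `pdfinfo` utility (from poppler-utils) is on the PATH; the function and variable names are my own.

```python
#!/usr/bin/env python3
# A sketch of the page-count tally in Python, assuming `pdfinfo` is installed.
import re
import subprocess
from pathlib import Path

def parse_page_count(pdfinfo_output: str) -> int:
    """Extract the number after 'Pages:' from pdfinfo's output (0 if absent)."""
    m = re.search(r"^Pages:\s*(\d+)", pdfinfo_output, re.MULTILINE)
    return int(m.group(1)) if m else 0

def total_pages(directory: str = ".") -> int:
    """Sum page counts over all non-empty PDFs in a directory."""
    total = 0
    for pdf in sorted(Path(directory).glob("*.pdf")):
        if pdf.stat().st_size == 0:  # skip the zero-length PDFs
            continue
        info = subprocess.run(["pdfinfo", str(pdf)],
                              capture_output=True, text=True).stdout
        pages = parse_page_count(info)
        print(f"{pages}\t{pdf.name}\t{total}")
        total += pages
    return total
```

The main difference from the shell version is that the zero-length check and the `Pages:` parsing are explicit functions rather than a `test` and an `awk` one-liner.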

# The next step for Linux development

As you might know, there are nearly 300 Linux distributions (currently 289, low in historical terms), and this is a testament to how successful the Linux kernel has become on the PC, as well as on other devices, especially in relation to previously-existing *NIX systems, which have either fallen by the wayside or barely exist in comparison. The *NIX system that might be a distant second would be BSD UNIX.

Just earlier today, I observed that for the installation of TexStudio, for instance, there are two installation images for MS-Windows (all versions from Windows 7 on up), the only distinction being between 32 and 64-bit. On the other hand, there were a plethora of Linux images, all depending on which distro of Linux you used. My distro is Ubuntu Studio, and I use Gnome as the window manager. The only Ubuntu-based Linux images were for xUbuntu (which uses xfce as a window manager). Apparently, it also seems necessary to compile a separate image each time a linux distro is upgraded: the 19 images I counted for xUbuntu covered versions 14 through 19. Now, I understand that separate images need to be compiled for different processors, but most of these are for PCs running with 32 or 64-bit processors. The same was true for each upgrade of Debian, or Fedora, or OpenSuse. And even then, they needed separate binaries from each other. There are easily more than 50 Linux-based installation images you can choose from at the moment.

The “package” system that is now near-universal in the Linux environment provides a convenient way for sysops to assure themselves that installations can happen without problems happening to the system. Before that, one compiled most new software from source, tweaking system variables and modifying config files to conform to whatever you had in your system. This has since become automated with the “./configure” or “make config” steps that most source distributions have these days.
In other words, modernization of Linux seems to mean increasing levels of abstraction, and increasing levels of trust that the judgement of a “make config” trumps human judgement of what needs configuring for the source to compile. On a larger scale, trusting a package manager over our own common sense can be seen as working “most of the time”, so there is a temptation to be lazy: if an installation fails due to a package conflict, just find something else to install instead of our first choice. Installing software by compiling from source, once seen as a rite of passage for any sensible Linux geek, is now seen as “going rogue”, since it subverts the package manager and, in a sense, takes the law into your own hands.

Of course, Linux installations still exist for the latter kind of Linux user. The foremost, in my opinion, is Slackware (if you screw up, at least something will run), and a close second is Arch Linux. It is my understanding that Arch Linux requires much more knowledge of your own hardware in order to even boot the system, whereas Slackware will likely at least boot if your knowledge of the hardware is not quite so keen (but still keen). My experience with Slackware is in the distant past, so I am not sure what the norm is these days, although I understand they still use tarballs, which I remember allowed me to play with an installation by un-compressing it in a directory tree not intended for the installation, to see what was inside before I committed myself to deciding whether it could be installed. The tarballs are compressed nowadays with “xz” compression, giving the files a “.txz” extension.

But I digress. Getting back to installation images, it should not be too difficult for the people who manage these linux distros to make it less necessary to have so many different images for the same Linux distribution.
In the MS-Windows example, only one version of TexStudio was needed across three or four different Windows versions. I am running Windows 7 with software that didn’t exist in the days of Windows 7, and with other software that originated on Windows 2000. All of it still runs, and runs quite well. Fixing this problem is hopefully do-able in the near future.

# A brief note on Pythagorean Triples

And I decided today to share what I learned about an algorithm for generating Pythagorean triples for any $m$ and $n$, where $m, n \in Z$. A Pythagorean triple is any three whole numbers which satisfy the equation $a^2 + b^2 = c^2$. Let $a = m^2 - n^2$
and $b = 2mn$ (so that $c = m^2 + n^2$), and you will obtain a solution to the relation $a^2 + b^2 = c^2$. It is therefore not that hard, if we allow $m$ and $n$ to be any numbers from 1 to 100, with $m \ne n$, to write a computer program to generate the first 9800 or so Pythagorean triples, allowing for negative values of $a$ or $b$.
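A minimal Python sketch of such a generator (the function name and structure are my own, not from any particular published program):

```python
def pythagorean_triples(limit=100):
    """Generate (a, b, c) with a = m^2 - n^2, b = 2mn, c = m^2 + n^2,
    for all m, n in 1..limit with m != n."""
    triples = []
    for m in range(1, limit + 1):
        for n in range(1, limit + 1):
            if m == n:
                continue
            a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
            triples.append((a, b, c))  # a is negative whenever n > m
    return triples

# Every generated triple satisfies the Pythagorean relation,
# since (m^2 - n^2)^2 + (2mn)^2 = (m^2 + n^2)^2 is an identity:
assert all(a * a + b * b == c * c for a, b, c in pythagorean_triples(20))
```

With `limit=100` this yields 100 × 100 − 100 = 9,900 triples (counting sign variants and duplicates), which is in the same ballpark as the “9800 or so” mentioned above.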
# Facebook bots apparently make their own language

Like a scene from Stanley Kubrick’s 2001: A Space Odyssey, computers are now seemingly taking matters into their own hands and possibly overthrowing their human overlords. Many news outlets are telling us that Facebook bots can talk to each other in a language they are making up on their own. Some news outlets appear convinced that this communication is real. Even fairly respectable news outlets such as Al Jazeera are suggesting the proverbial sky is falling, though they fall short of speculating that the Facebook bots are plotting against us. While Facebook pulled the plug on the encoded “conversation” (which on inspection was repetitive gibberish along with repetitive responses), one half-expected the bots to try to prevent the operators from turning them off somehow. Maybe by disabling Control+C or something. Maybe they were plotting to prevent the human operator from pulling the plug from the wall.

What Facebook was experimenting with was something called an “End-to-End” negotiator, the source code of which is available to everyone on GitHub. Far from being a secret experiment, it was based on a very public computer program written in Python, whose source code anyone could download and play with themselves on a Python interpreter, itself freely available for most operating systems. And to greatly aid the confused programmer, the code was documented in some detail, just to make sure everyone understands it, what it does, and how to make it talk to other instances of the same program. The bots were discussing something, but no one knows what.
There are news stories circulating that the bots gerrymandered English words to become more efficient for themselves, but I am going to invoke Occam’s Razor and assume, until convinced otherwise, that this was a bug, the bots were braindead, and the world is safe from plotting AI bots. For now.

# Programmatic Mathematica XVII: The Collatz Conjecture

There has been a lot of interest recently in the Collatz Conjecture. A lot of video blogs are going into it, particularly Numberphile, a vlog present on YouTube. It might have something to do with the fact that this year is the 70th anniversary of the conjecture. It is a simple idea, easy enough for a child to understand. Yet it has proven so difficult that no one has been able to either prove or disprove it to this day.

The Collatz Conjecture is the hunch, or guess, or idea, that performing a certain recursive operation on any positive integer leads to the inevitable result that repeated operations on all successors will lead to the number 1. After that, the sequence {1, 4, 2, …} repeats infinitely. This problem was first posed by Lothar Collatz in 1937. The reason it is only a conjecture is that no one has been able to prove it for all positive integers; it is only conjectured to work as such. Over the past seventy years, no one has been able to furnish a counterexample where the number 1 is not reached. So by now, we’re “pretty sure” Collatz is correct for all positive integers.

I thought of some Mathematica code to write for this. The algorithm goes something like:

1. Precondition: $n > 0; n \in Z$
2. If n is 1, return 1 and exit
3. If n is even, return $n/2$
4. If n is odd, return $3n + 1$
5. Go back to step 2.

Like Fermat’s Last Theorem, which was proved once and for all in 1995 by Professor Andrew Wiles, aided by Richard Taylor, the Collatz Conjecture is simple enough to describe to any lay person (as I just did), but its proof has eluded us.
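As a quick illustration, the steps above can be sketched in Python (an iterative sketch of my own, separate from the Mathematica treatment):

```python
def collatz_sequence(n):
    """Follow the Collatz rule from n down to 1, recording each term."""
    if n < 1:
        raise ValueError("n must be a positive integer")
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1  # halve if even, else 3n + 1
        seq.append(n)
    return seq

# Starting at 7:
# collatz_sequence(7) → [7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
```

Of course, the `while` loop only terminates if the conjecture holds for the starting value, which is exactly what is being conjectured.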
The application of the above algorithm to Mathematica code involves some new syntax. Sow[n] acts as a kind of array for anyone who doesn’t want to declare and implement an array. I would suppose that the designers of the Mathematica language didn’t see the need for an array in many situations, such as sequences of numbers. If you want to generate a sequence, you want the numbers in order from some lower bound up to some upper bound; if you want to list them, you want the same thing. It is not often that you want to access only one particular value inside the sequence. This is for those people who just want the whole sequence uninterrupted. I guess what Sow[n] does is leave the members of the sequence lying around in some pre-defined region of computer memory. That memory is likely freed once the Reap[] function is called, which lists all the members of the stored sequence in the order generated.

EvenQ[] and OddQ[] are employed to check whether n is odd or even before executing the rest of the line. If the test is false, control passes to the next line. The testing is inefficient here, since each condition is tested every time: even when we already know the number is even, OddQ[] is checked anyway.

```
ClearAll[Co];
Co[1] = 1;
Co[n_ /; EvenQ[n]] := (Sow[n]; Co[n/2])
Co[n_ /; OddQ[n]] := (Sow[n]; Co[3*n + 1])
Collatz[n_] := Reap[Co[n]]
```

But Reap[] by itself gives a nested array (or, more accurately, a “ragged” array), with the final “1” outside the innermost nesting, where the other numbers are:

```
In[10]:= Collatz[7]
Out[10]= {1, {{7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2}}}
```

Nested arrays are unnecessary, but the remedy to this gets rid of the number “1”, which is the number the Collatz function is supposed to always land on. So we then rely on the presence of the number “2”, the number arrived at before going to “1”, at the end of the sequence. Getting rid of the nested array relies on using Flatten[Reap[Co[n]]].
But when you do that, this happens:

```
In[11]:= Collatz[7]
Out[11]= {1, 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2}
```

Flattening has the effect of placing the ending 1 at the beginning of the array. If we can live with this minor inconvenience, then we are able to test the Collatz Conjecture on wide ranges of positive integers. So, this is the code we ended up with:

```
ClearAll[Co];
Co[1] = 1;
Co[n_ /; EvenQ[n]] := (Sow[n]; Co[n/2])
Co[n_ /; OddQ[n]] := (Sow[n]; Co[3*n + 1])
Collatz[n_] := Flatten[Reap[Co[n]]]
```

The sequences generated by the Collatz function have the well-documented property of having common endings. Using the Table[] command, we can observe the uncanny phenomenon that most of these sequences end in “8, 4, 2” (or, to be more precise, “8, 4, 2, 1”). Here are the sequences generated for the numbers from 1 to 10:

```
In[38]:= Table[Collatz[i], {i, 10}]
Out[38]= {{1}, {1, 2}, {1, 3, 10, 5, 16, 8, 4, 2}, {1, 4, 2},
  {1, 5, 16, 8, 4, 2}, {1, 6, 3, 10, 5, 16, 8, 4, 2},
  {1, 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2},
  {1, 8, 4, 2}, {1, 9, 28, 14, 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2},
  {1, 10, 5, 16, 8, 4, 2}}
```

Because even numbers are divided by 2, somewhere along the meanderings of the sequence a power of 2 is encountered, and from there it’s a one-way trip to the number “1”.

# BoUoW: Bash on Ubuntu on Windows

I am not proud of possibly inventing the ugly acronym “BOUOW”, but “BASH on Ubuntu on Windows” appears to compel it. Maybe we can pronounce it “bow-wow”; not sure if that’s complimentary. Just did a Google search, and, no, as predicted, I couldn’t have invented it: it is variously acronymed B.O.U.O.W. or BoUoW. It has been around since at least March of 2016, giving end users, computer geeks, and developers plenty of time to come up with something of a nickname or acronym. But I actually mean to praise BoUoW, and to give it considerably high praise.
This is a brave move on Microsoft’s part, and a long time coming. MS has made *NIX access available in its kernel for some time now, making *NIX conventions possible on the command line, like certain commands in PowerShell. The user has to enable the capability in the Windows 10 settings (“Windows Subsystem for Linux” (WSL)), put the machine into “Developer mode” as Admin, and follow the instructions on the MSDN website to download the binaries and enable a bash shell from either the command line or PowerShell.

BoUoW takes advantage of the WSL to do impressive things, like use the same network stack as Windows 10 itself. This is because, with WSL enabled, a UNIX command such as ssh can make calls directly to the Windows 10 kernel to access the network stack.

This is, by Microsoft’s admission, a work in progress. It would worry me if they had not said that. But lots of things do work. vi works and is symlinked (or somehow aliased) to vim. The bash shell comes with some other common aliases, like “ll” for “ls -l”, for instance. And as part of the installation, you actually have a miniature version of Ubuntu, complete with a C compiler and an image of Ruby, Perl, and Python; if something isn’t installed, you can always use “apt-get” to install it.

One of the security features has the disadvantage of requiring a separate install of BoUoW for each user. If a user types “bash” in a cmd window and BoUoW is not installed for that user, the install happens all over again, and the image goes under that user’s AppData directory. If you are using an SSD for your C: drive like me, you might find that limiting due to a shortage of space.

There are many things not recommended yet. If you are a serious web developer, for example, you will find that many of the things you want, such as mySQL, are not currently working the right way.
If you are a systems programmer, then you’ll find that ps and top only see the Unix-side processes, so I wouldn’t use BoUoW for any serious process management. That being said, it does contain the old standbys: grep, sed, and awk. gcc had to be installed separately. The binary it created for my “Hello, world!” program lacks the Microsoft .exe extension and, as is usual for Unix binaries, has no default extension at all. It is gcc version 4.8.4; the current version is 6.3. This older gcc usually won’t pose a problem for most users.

The current stable Ubuntu is 16.04. BoUoW uses the previous stable version, 14.04, and thus has slightly older versions of Perl (5.18), Python (2.7.6), bash (4.3.11), Ruby (1.8, available using apt-get), vim (7.4), and other software. Vim, however, appears to be the “large” version, which is expandable using plugins like Vundle, which is good news. I don’t suspect that these slightly older versions will cause anyone problems, except possibly Python, which has since gone all the way up to version 3.5.2. You are also warned that, under Python or Perl, you might run into problems due to not all of their libraries running correctly under BoUoW. Not surprising, since Python has hundreds of installable libraries and Perl has thousands. Could take a while.

# Cygwin has come a long way … A story in animated GIFs

First of all, let me say that there is some currency to what the title and pictures imply: Cygwin/X really has come a long way. Ten years ago, the only viable way to run Cygwin was through a DOS-style UNIX shell. The window system Cygwin/X provided, such as it was, was mostly TWM, a primitive window manager which I used to use, running the core programs in the X-Windows distribution. Most of what came with Cygwin, such as Gnome or KDE, never worked for me, making me an FVWM2 fan for a long time.
Along the way, I came to appreciate that while FVWM2 was very stripped-down, it made up for it in flexibility and configurability. Even now, FVWM2 is quite liveable. I decided yesterday to upgrade Cygwin on one of my older computers, and after working past some glitches in the installation, found that:

1. If you have your guard down, you may still install packages you hadn’t intended, particularly the TeX language packs for languages and alphabet systems that you know you will never use. Minutes can turn to hours with postinstall scripts running, trying to configure these redundant packages.
2. Mate is a recent addition to Cygwin, and actually works on my slow system in 2016. In fact, I am using the Midori web browser to edit this blog under Mate in Cygwin/X.
3. GIMP was once a graphics program you had to compile; now it is installable for Cygwin as its own package.
4. When moving my old distribution to another drive, I found a ton of permission problems, caused by compiling the source for various downloaded code as another user, not the owner of the directory.
5. I now have a good system, with much more functionality than ever before.

Cygwin has gone from a system that was “mostly broken” to “mostly working” in the space of 10 or so years.

# Programmatic Mathematica XVI: Patterns in Highly Composite Numbers

This article was inspired by a vlog from Numberphile, on the discussion of “5040: an anti-prime number”, or some title like that. A contributor to the OEIS named Jean-François Alcover came up with a short bit of Mathematica code that I modified slightly:

```
Reap[For[record = 0; n = 1, n <= 110880,
    n = If[n < 60, n + 1, n + 60],
    tau = DivisorSigma[0, n];
    If[tau > record, record = tau; Print[n, "\t\t", tau]; Sow[tau]]]][[2, 1]]
```

This generates a list of numbers with an unusually high number of factors, called “highly composite numbers”, up to 110,880. The second column of the output is the number of factors.
```
1         1
2         2
4         3
6         4
12        6
24        8
36        9
48        10
60        12
120       16
180       18
240       20
360       24
720       30
840       32
1260      36
1680      40
2520      48
5040      60
7560      64
10080     72
15120     80
20160     84
25200     90
27720     96
45360     100
50400     108
55440     120
83160     128
110880    144
```

For a number like 110,880, there is no number before it that has more than 144 factors. Highly composite numbers (HCNs) are loosely defined as natural numbers which have more factors than any number that came before them. 12 is such a number, with 6 factors, as is 6 itself, with 4. The number 5040 has 60 factors, and is also considered highly composite:

$5040 = 2^4 \times 3^2 \times 5 \times 7$

This works out to 60 because, with $2^4$, for example, we get the factors 2, 4, 8, and 16. With $2^4 \times 3^2$, we get 2, 3, 4, 6, 8, 9, 16, 18, 36, 72, and 144, all of which evenly divide 5040. The total number of factors, including 1 and 5040 itself, can be had by adding 1 to each exponent and multiplying: $(4+1)(2+1)(1+1)(1+1) = 5 \times 3 \times 2 \times 2 = 60$.

Initially, factorization of HCNs was done in Maple using the “ifactor()” command. But there is a publication circulating the Internet referring to a table created by Ramanujan that has these factors. A partial list of these is summarized in a table below. The top row headers are the prime numbers that can appear as prime factors, from 2 to 17. The first column is the number to factorize. The numbers in the columns below these primes are the exponents on the primes, such as: $10080 = 2^5 \times 3^2 \times 5^1 \times 7^1$. The last column is the total number of factors of the HCN. So, by adding 1 to each exponent in the row and multiplying, we find that 10,080 has $6 \times 3 \times 2 \times 2 = 72$ factors.

###### NUMBER PATTERNS OBSERVED

Among the numbers of factors (the “# of factors” column), we get overlapping patterns starting from 60. One of them would be the sequence: 120, 240, 360, 480, 600, and 720. But the lack of an 840 breaks that pattern. But then we get 960, then 1080 is skipped, but then we get 1200.
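The add-one-to-each-exponent rule, and the record-hunting loop, can both be mirrored in Python. This is a naive sketch of my own (fine for small limits, nothing like an efficient HCN search):

```python
def divisor_count(n):
    """Count divisors via prime factorization: the product of (exponent + 1)."""
    count, p = 1, 2
    while p * p <= n:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        count *= e + 1
        p += 1
    if n > 1:          # a leftover prime factor with exponent 1
        count *= 2
    return count

def highly_composite_up_to(limit):
    """Return (n, tau) for each n whose divisor count beats every predecessor."""
    record, hcns = 0, []
    for n in range(1, limit + 1):
        tau = divisor_count(n)
        if tau > record:
            record = tau
            hcns.append((n, tau))
    return hcns
```

For example, `divisor_count(5040)` gives 60, matching the hand calculation above, and `highly_composite_up_to(110880)` reproduces the table (though, unlike the Mathematica loop, it does not skip ahead in steps of 60).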
For numbers of factors that are powers of 2, the pattern seems to run right off the end of the table and beyond: 64, 128, 256, 512, 1024, 2048, 4096, 8192, …. Before 5040, the pattern is already complete, since 2 has 2 factors, 6 has 4 factors, 24 has 8 factors, 120 has 16 factors, and 840 has 32 factors. The HCN with 8192 factors is 3,212,537,328,000. We have to go beyond that to see if there is a number with 16,384 factors.

Multiples of 12 make their appearance as numbers of factors: 12, 24, 36, 48, 60 (the number of factors of 5040), 72, 84, 96, 108, 120, but the lack of a 132 breaks that pattern. But then we see 144, 288, 432, 576, 720, 864, 1008, 1152, and the pattern ends with the lack of a 1296. We also observe short runs of numbers of factors in the sequence 100, 200, 400, 800, until we reach the end of this table. But the pattern continues with the number 2,095,133,040, which has 1600 factors; then 3200 is skipped. There are also multiples of 200: 200, 400, 600, 800, but the lack of a 1000 breaks that pattern. Seen as multiples of 400, we get 400, 800, 1200, 1600, but then 2000 is skipped.

There are also peculiarities in the HCNs themselves. Going from 5040 to as high as 41,902,660,800, only 4 of the 60 HCNs were not multiples of 5040; those exceptions all left the remainder 2520, which is one-half of 5040. Also, beginning with the HCN 720,720, we observe a run of numbers containing 3-digit repeats: 1081080, 1441440, 2162160, 2882880, 3603600, 4324320, 6486480, 7207200, 8648640, 10810800, and 14414400.
```
Number         2  3  5  7 11 13 17    # of factors
--------------------------------------------------
5040           4  2  1  1                   60
7560           3  3  1  1                   64
10080          5  2  1  1                   72
15120          4  3  1  1                   80
20160          6  2  1  1                   84
25200          4  2  2  1                   90
27720          3  2  1  1  1                96
45360          4  4  1  1                  100
50400          5  2  2  1                  108
55440          4  2  1  1  1               120
83160          3  3  1  1  1               128
110880         5  2  1  1  1               144
166320         4  3  1  1  1               160
221760         6  2  1  1  1               168
332640         5  3  1  1  1               192
498960         4  4  1  1  1               200
554400         5  2  2  1  1               216
665280         6  3  1  1  1               224
720720         4  2  1  1  1  1            240
1081080        3  3  1  1  1  1            256
1441440        5  2  1  1  1  1            288
2162160        4  3  1  1  1  1            320
2882880        6  2  1  1  1  1            336
3603600        4  2  2  1  1  1            360
4324320        5  3  1  1  1  1            384
6486480        4  4  1  1  1  1            400
7207200        5  2  2  1  1  1            432
8648640        6  3  1  1  1  1            448
10810800       4  3  2  1  1  1            480
14414400       6  2  2  1  1  1            504
17297280       7  3  1  1  1  1            512
21621600       5  3  2  1  1  1            576
32432400       4  4  2  1  1  1            600
61261200       4  2  2  1  1  1  1         720
73513440       5  3  1  1  1  1  1         768
110270160      4  4  1  1  1  1  1         800
122522400      5  2  2  1  1  1  1         864
147026880      6  3  1  1  1  1  1         896
183783600      4  3  2  1  1  1  1         960
245044800      6  2  2  1  1  1  1        1008
294053760      7  3  1  1  1  1  1        1024
367567200      5  3  2  1  1  1  1        1152
551350800      4  4  2  1  1  1  1        1200
```

After that run, we see a 4-digit overlapping repeat: the digits of the HCN 17297280 could be thought of as an overlap of 1728 and 1728, making 1729728 part of that number. The 3-digit run continues with 21621600, 32432400, and 61261200, and after that the pattern is broken.

# Programmatic Mathematica XV: Lucas Numbers

The Lucas sequence follows the same rule of generation as the Fibonacci sequence, except that the Lucas sequence begins with $t_1 = 2$ and $t_2 = 1$. Lucas numbers are found in the petal counts of flowering plants and pinecone spirals, much the same as the Fibonacci numbers. Also, like the Fibonacci numbers, the ratios of successive pairs of Lucas numbers approach the Golden Ratio, $\phi$.

The Mathematica version (10) which I am using has a way of highlighting certain numbers that meet certain conditions. One of them is the Framed[] function, which draws a box around numbers.
Framed[] can be placed into If[] statements so that an array of numbers can be fed into it (using a Table[] command). For example, let’s frame all Lucas numbers that are prime:

```
In[1]:= If[PrimeQ[#], Framed[#], #] & /@ Table[L[n], {n, 0, 30}]
```

The If[] statement is best described as: If[Condition[#], do_if_true[#], do_if_false[#]]. The crosshatch # is a positional parameter upon which some condition is placed by some function we are calling Condition[]. This boolean function returns True or False. In the statement we are using above, the function PrimeQ will return True if the number in the positional parameter is prime; False if it is 1 or composite. The positional parameters require a source of numbers from which to make computations, and for this source we shall look to a sequence of Lucas numbers generated by the Table[] command. The function which generates the numbers is a user-defined function L[n_]:

```
In[2]:= L[0] := 2
In[3]:= L[1] := 1
In[4]:= L[n_] := L[n-2] + L[n-1]
```

With that, I can generate an array with the Table[] command to get the first 31 Lucas numbers:

```
In[5]:= Table[L[n], {n, 0, 30}]
Out[5]= {2, 1, 3, 4, 7, 11, 18, 29, 47, 76, 123, 199, 322, 521, 843, 1364,
  2207, 3571, 5778, 9349, 15127, 24476, 39603, 64079, 103682, 167761,
  271443, 439204, 710647, 1149851, 1860498}
```

This list (or “table”) of numbers is passed through the If[] statement thusly:

```
In[6]:= If[PrimeQ[#], Framed[#], #] & /@ Table[L[n], {n, 0, 30}]
```

to produce the output. (Note that the original showed an actual screenshot here, to get the effect of the boxes.) So, these are the first 31 Lucas numbers, with boxes around the primes. The Table[] command feeds the Lucas numbers into the positional parameter represented by #.

There was a sequence I created. Maybe it’s already famous; I have no idea. On the other hand, maybe no one cares.
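The same generate-and-test idea can be sketched in Python, with a plain trial-division test standing in for PrimeQ[] (names and structure are my own):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def lucas(n):
    """Lucas numbers: L(0) = 2, L(1) = 1, L(n) = L(n-2) + L(n-1)."""
    return 2 if n == 0 else 1 if n == 1 else lucas(n - 2) + lucas(n - 1)

def is_prime(n):
    """Trial-division primality check (a PrimeQ stand-in for small n)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

first_31 = [lucas(n) for n in range(31)]
lucas_primes = [x for x in first_31 if is_prime(x)]
# e.g. 2, 3, 7, 11, 29 and 47 are among the primes picked out
```

The `lru_cache` memoization plays the role that Mathematica’s pattern-matched definitions play: without it, the naive double recursion would recompute the same terms exponentially many times.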
But I wanted to show that with any made-up sequence that is recursive in the same way the Fibonacci and Lucas numbers are, the ratio of neighbouring numbers gets closer to the Golden Ratio as the numbers grow. The Golden Ratio is $\phi = \frac{1 + \sqrt{5}}{2}$. I want to show that this is not really anything special to be attributed to Fibonacci or François Lucas. For any recursive sequence in which the next term is the sum of the previous two, sooner or later you will approach the Golden Ratio in the same way. It doesn't matter what your starting numbers are; in Lucas's sequence, the numbers don't even have to begin in order. So let's say I have:

```
K[0] := 2
K[1] := 5
K[n_] := K[n-2] + K[n-1]
```

So, just for kicks, I'll show the first 31 terms:

```
Table[K[n], {n, 0, 30}]

{2, 5, 7, 12, 19, 31, 50, 81, 131, 212, 343, 555, 898, 1453, 2351,
 3804, 6155, 9959, 16114, 26073, 42187, 68260, 110447, 178707, 289154,
 467861, 757015, 1224876, 1981891, 3206767, 5188658}
```

Now, let's output the Golden Ratio to 15 digits as a reference:

```
N[GoldenRatio, 15]
1.61803398874989
```

Now, let's take the ratio of the last two numbers in my 31-member sequence:

```
N[K[30]/K[29], 15]
1.61803398874942
```

You may say that the last two digits are off, but trying the same with the Fibonacci sequence, the ratio of the 31st and 30th numbers yields merely 1.61803398874820, off in the last three digits. For Lucas: 1.61803398875159, off in the last four, which is even worse. So, my made-up sequence comes closer to $\phi$ than either Lucas or Fibonacci after the same number of terms.

I have tried other made-up sequences; some are more accurate and some less. It seems to depend on the starting numbers: some combinations work better, and you won't necessarily get greater accuracy by starting with larger numbers.

# The HP 35s Calculator: a revised review

A while ago, I wrote a blog article on a different blog regarding the HP 35s programmable calculator.
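The same experiment can be sketched in a few lines of Python: iterate the two-term recurrence from any seed pair and watch the ratio of successive terms settle near $\phi$. The seed pairs below correspond to the K, Fibonacci, and Lucas sequences discussed above:

```python
# Any recurrence t(n) = t(n-2) + t(n-1) drives the ratio of successive
# terms toward the Golden Ratio, regardless of the two seed values.
PHI = (1 + 5 ** 0.5) / 2  # 1.618033988749895...

def ratio_after(a, b, steps):
    """Run the recurrence `steps` times from seeds (a, b); return the
    ratio of the last two terms produced."""
    for _ in range(steps):
        a, b = b, a + b
    return b / a

# (2, 5) = the made-up K sequence; (1, 1) = Fibonacci; (2, 1) = Lucas.
for seed in [(2, 5), (1, 1), (2, 1)]:
    r = ratio_after(*seed, steps=29)   # K[30]/K[29] and so on
    print(seed, r, abs(r - PHI))
```

All three errors come out vanishingly small at 31 terms, which is the point: the convergence belongs to the recurrence, not to any particular starting pair.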
Depending on where you buy it, it can cost anywhere from $55 to $98.

I have heard in other places about the plastic used to make this calculator. It is indeed cheap plastic, and it certainly feels hollow when you hold it. That belies the amount of memory and the calculating power that lies inside. The calculator has two calculation modes: ALG mode (algebraic mode), which resembles conventional calculators, and RPN mode (reverse Polish notation), which, for those who do long calculations, provides a way to avoid parentheses, though it requires getting used to stacks.

As far as RPN mode goes, this calculator's stack holds at most four numbers; I have read reviews of other HP calculators where the stack can be much larger. Numbers push to the bottom of the stack as you enter them, and as you do, the "bottom" of the stack actually moves "up" on the display. This makes the data structure awkward to discuss, because the numbers scroll in the opposite direction from what the terminology suggests. The usual theory is that you "push" data onto the top of the stack and "pop" data off the top. This is a LIFO ("last in, first out") data structure. In the HP 35s implementation, you instead "push" data onto the bottom of the stack, the numbers "above" it move upward, and you "pop" data off the bottom. It amounts to the same thing in the end: it is still a LIFO data structure. Pushing a fifth number onto the stack causes the first number to disappear, so you can only work with four numbers at a time.
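That drop-on-overflow behaviour can be sketched in a few lines of Python. This models only the four-level LIFO behaviour described above, not the real firmware (which has extra rules, such as register duplication on ENTER):

```python
# Minimal sketch of a four-level RPN entry stack: entering a fifth
# number pushes the oldest one off the far end and it is lost.
def push(stack, value, depth=4):
    stack.append(value)   # newest entry goes on the "d" end
    if len(stack) > depth:
        stack.pop(0)      # oldest entry (the "a" end) falls off
    return stack

s = []
for n in [9, 8, 7, 6, 5]:  # enter five numbers in a row
    push(s, n)
print(s)                    # → [8, 7, 6, 5]; the 9 has been pushed off
```

Whichever end you call the "top", the last value in is always the first value out.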

So, let’s say that you have the following stack:

a: 8
b: 7
c: 6
d: 5

The last two numbers entered are "6" and "5", in memory locations "c" and "d" respectively. Operations are done on the last two numbers entered. So, if I now press the "+" operator, it will add 6 and 5 and pop both numbers off the stack.

a: 0 (empty)
b: 8
c: 7
d: 11

The stack shifts down, location "a" becomes empty, and the "11", the result of the calculation, replaces both the 6 and the 5.

Some operators are unary: pressing the square root key performs the operation on only the last number (location "d") and places the result back into location "d", replacing the 11.

a: 0 (empty)
b: 8
c: 7
d: 3.31662479036

Then there is the programmability of the calculator. There are many commands available, and one pet peeve of mine is that you are only allowed a single letter as a program name.