The flap about flossing

Barrels of ink, moles of electrons, and weeks of nighttime talk shows have been used up over the notion that, for the past 30 to 40 years, we were all duped: flossing does nothing for your teeth, and all that guilt about not doing it was wasted over nothing. But let’s be precise here. The FDA is citing a “lack of strong evidence”, not saying that flossing has no effect.

Journalists, with a BS in something that is not a Bachelor of Science, take it to mean that there was some kind of propaganda conspiracy to get us to floss unnecessarily. It is amazing that journalists think we are that stupid. I get stuff stuck in my teeth that I can’t remove with my toothbrush, and journalists would have me believe that the FDA is telling me it’s OK to leave it there?

“No evidence” was cited because, according to the FDA, no one has done a serious study. It could be that the FDA thought the benefits were self-evident. Do people really need evidence that getting food out from between their teeth is better for their oral health than leaving it stuck in there? The FDA probably thought, rightly, that tax money could be better spent elsewhere.

Programmatic Mathematica XVI: Patterns in Highly Composite Numbers

This article was inspired by a Numberphile video titled something like “5040: an anti-prime number”.

A contributor to the OEIS named Jean-François Alcover came up with a short bit of Mathematica code that I modified slightly:

Reap[
  For[
    record = 0; n = 1,                     (* start with no record and n = 1 *)
    n <= 110880,                           (* search up to 110,880 *)
    n = If[n < 60, n + 1, n + 60],         (* step by 1 below 60, then by 60 *)
    tau = DivisorSigma[0, n];              (* tau = number of divisors of n *)
    If[tau > record,
      record = tau;                        (* new record number of divisors *)
      Print[n, "\t\t", tau];
      Sow[tau]]]][[2, 1]]

This generates the list of numbers with an unusually large number of factors, called “highly composite numbers”, up to 110,880. The second column of the output is the number of factors.

1           1
2           2
4           3
6           4
12          6
24          8
36          9
48          10
60          12
120         16
180         18
240         20
360         24
720         30
840         32
1260        36
1680        40
2520        48
5040        60
7560        64
10080       72
15120       80
20160       84
25200       90
27720       96
45360       100
50400       108
55440       120
83160       128
110880      144

For a number like 110,880, no smaller number has as many factors as its 144.

A highly composite number (HCN) is loosely defined as a natural number which has more factors than any number before it. 12 is such a number, with 6 factors, as is 6 itself, with 4. The number 5040 has 60 factors, and is also highly composite.

5040 = 2^4 × 3^2 × 5 × 7
This works out to 60 factors because, with the 2^4 alone, we get the factors 2, 4, 8, and 16. With 2^4 × 3^2, we also pick up factors such as 3, 6, 9, 18, 36, 72, and 144, all of which evenly divide 5040. The total number of factors, including 1 and 5040 itself, can be had by adding 1 to each exponent and multiplying: (4+1)(2+1)(1+1)(1+1) = 5 × 3 × 2 × 2 = 60.
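As a quick check of that rule (a Mathematica sketch of my own, not part of the original article), FactorInteger[] returns the prime/exponent pairs, and multiplying the incremented exponents agrees with the built-in divisor-count function DivisorSigma[0, n]:

    FactorInteger[5040]                            (* {{2, 4}, {3, 2}, {5, 1}, {7, 1}} *)
    Times @@ (FactorInteger[5040][[All, 2]] + 1)   (* (4+1)(2+1)(1+1)(1+1) = 60 *)
    DivisorSigma[0, 5040]                          (* 60 *)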

Initially, factorization of HCNs was done in Maple using the “ifactor()” command. But there is a publication circulating the Internet referring to a table created by Ramanujan that has these factors. A partial list is summarized in a table below. The top row headers are the primes that can appear as prime factors, from 2 to 17. The first column is the number to factorize. The numbers below each prime are the exponents on that prime, as in: 10,080 = 2^5 × 3^2 × 5^1 × 7^1. The last column is the total number of factors of each HCN. So, by adding 1 to each exponent in the row and multiplying, we find that 10,080 has 6 × 3 × 2 × 2 = 72 factors.
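The rows of the table below can also be reproduced directly; this little sketch is mine (not Ramanujan’s table or the original Maple session), mapping an HCN to its exponent vector and its divisor count:

    row[n_] := {n, FactorInteger[n][[All, 2]], DivisorSigma[0, n]}

    row /@ {5040, 7560, 10080, 720720}
    (* {{5040,   {4, 2, 1, 1},       60},
        {7560,   {3, 3, 1, 1},       64},
        {10080,  {5, 2, 1, 1},       72},
        {720720, {4, 2, 1, 1, 1, 1}, 240}} *)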

NUMBER PATTERNS OBSERVED

In the factor counts (the “# of factors” column), we get overlapping patterns starting from 60. One of them is the sequence 120, 240, 360, 480, 600, and 720, but the lack of an 840 breaks that pattern. We then get 960, 1080 is skipped, and then we get 1200.

Factor counts that are powers of 2 seem to run right off the end of the table and beyond: 64, 128, 256, 512, 1024, 2048, 4096, 8192, …. Before 5040 the pattern is already complete, since 2 has 2 factors, 6 has 4, 24 has 8, 120 has 16, and 840 has 32. The HCN with 8192 factors is 3,212,537,328,000; we would have to go beyond that to see whether there is a number with 16,384 factors.
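The divisor count of that last number, at least, is easy to verify (my own check, not from the original article):

    FactorInteger[3212537328000]
    (* {{2, 7}, {3, 3}, {5, 3}, {7, 1}, {11, 1}, {13, 1}, {17, 1}, {19, 1}, {23, 1}} *)

    DivisorSigma[0, 3212537328000]   (* 8*4*4*2^6 = 8192 *)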

Multiples of 12 also make their appearance as factor counts: 12, 24, 36, 48, 60 (the number of factors of 5040), 72, 84, 96, 108, 120, but the lack of a 132 breaks that pattern. Then we see 144, 288, 432, 576, 720, 864, 1008, 1152, and the pattern ends with the lack of a 1296.

We also observe a short doubling run among the factor counts: 100, 200, 400, 800, up to the end of this table. The pattern continues with the number 2,095,133,040, which has 1600 factors; after that, 3200 is skipped.

There are also multiples of 200: 200, 400, 600, 800, but the lack of a 1000 breaks that pattern. Seen instead as multiples of 400, we get 400, 800, 1200, 1600, but then 2000 is skipped.

There are also peculiarities in the HCNs themselves. Going from 5040 up to 41,902,660,800, only 4 of the 60 HCNs in that range are not multiples of 5040, and those leave a remainder of 2520, which is one-half of 5040.

Also, beginning with the HCN 720,720, we observe a run of numbers containing repeated 3-digit groups: 1081080, 1441440, 2162160, 2882880, 3603600, 4324320, 6486480, 7207200, 8648640, 10810800, and 14414400.

Number      2   3   5   7   11  13  17      # of factors
---------------------------------------------------------
5040        4   2   1   1                        60
7560        3   3   1   1                        64
10080       5   2   1   1                        72
15120       4   3   1   1                        80
20160       6   2   1   1                        84
25200       4   2   2   1                        90
27720       3   2   1   1   1                    96
45360       4   4   1   1                        100
50400       5   2   2   1                        108
55440       4   2   1   1   1                    120
83160       3   3   1   1   1                    128
110880      5   2   1   1   1                    144
166320      4   3   1   1   1                    160
221760      6   2   1   1   1                    168
332640      5   3   1   1   1                    192
498960      4   4   1   1   1                    200
554400      5   2   2   1   1                    216
665280      6   3   1   1   1                    224
720720      4   2   1   1   1   1                240
1081080     3   3   1   1   1   1                256
1441440     5   2   1   1   1   1                288
2162160     4   3   1   1   1   1                320
2882880     6   2   1   1   1   1                336
3603600     4   2   2   1   1   1                360
4324320     5   3   1   1   1   1                384
6486480     4   4   1   1   1   1                400
7207200     5   2   2   1   1   1                432
8648640     6   3   1   1   1   1                448
10810800    4   3   2   1   1   1                480
14414400    6   2   2   1   1   1                504
17297280    7   3   1   1   1   1                512
21621600    5   3   2   1   1   1                576
32432400    4   4   2   1   1   1                600
61261200    4   2   2   1   1   1   1            720
73513440    5   3   1   1   1   1   1            768
110270160   4   4   1   1   1   1   1            800
122522400   5   2   2   1   1   1   1            864
147026880   6   3   1   1   1   1   1            896
183783600   4   3   2   1   1   1   1            960
245044800   6   2   2   1   1   1   1            1008
294053760   7   3   1   1   1   1   1            1024
367567200   5   3   2   1   1   1   1            1152
551350800   4   4   2   1   1   1   1            1200

After that run, we see a 4-digit overlapping repeat: the digits of the HCN 17297280 can be thought of as 1728 and 1728 overlapping to make the 1729728 inside that number. The 3-digit repeats then continue with 21621600, 32432400, and 61261200, and after that the pattern is broken.

Programmatic Mathematica XV: Lucas Numbers

The Lucas sequence follows the same generation rule as the Fibonacci sequence, except that the Lucas sequence begins with t_1 = 2 and t_2 = 1.

Lucas numbers are found in the petal counts of flowering plants and in pinecone spirals, much the same as the Fibonacci numbers. Also, like the Fibonacci numbers, the ratios of successive pairs of Lucas numbers approach the Golden Ratio, \phi. The version of Mathematica I am using (10) has ways of highlighting numbers that meet certain conditions. One of them is the Framed[] function, which draws a box around a number. Framed[] can be placed inside an If[] statement so that an array of numbers (generated with a Table[] command) can be fed into it.

For example, let’s frame all Lucas numbers that are prime:

In[1]:= If[PrimeQ[#], Framed[#], #] & /@ Table[L[n], {n, 0, 30}]

The If[] statement is best described as:

If[Condition[#], do_if_true[#], do_if_false[#]]

The hash mark # is a positional parameter (Mathematica’s Slot) upon which some condition is placed by a function we are calling Condition[]. This Boolean function returns True or False. In the statement above, PrimeQ returns True if the number in the positional parameter is prime, and False if it is 1 or composite.

The positional parameter requires a source of numbers on which to compute, and for this source we look to a sequence of Lucas numbers generated by the Table[] command. The numbers themselves come from a user-defined function L[n_]:

In[2]:= L[0] := 2
In[3]:= L[1] := 1
In[4]:= L[n_] := L[n-2] + L[n-1]

With that, I can generate an array with the Table[] command to get the first 31 Lucas numbers:

In[5]:= Table[L[n], {n, 0, 30}]
{2, 1, 3, 4, 7, 11, 18, 29, 47, 76, 123, 199, 322, 521, 843, 1364, \
2207, 3571, 5778, 9349, 15127, 24476, 39603, 64079, 103682, 167761, \
271443, 439204, 710647, 1149851, 1860498}

This list (or “table”) of numbers is passed through the If[] statement thusly:

In[6]:= If[PrimeQ[#], Framed[#], #] & /@ Table[L[n], {n, 0, 30}]

to produce the following output:

[Screenshot: Array_Frame_Lucas_Prime]

Note that this one is an actual screenshot, to show the effect of the boxes. So, these are the first 31 Lucas numbers, with boxes around the prime numbers. The Table[] command feeds the Lucas numbers into the positional parameter represented by #.
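One caveat about the recursive definition of L[n_] above (my own aside, not in the original): each call recomputes both branches, so it slows down quickly as n grows. A common Mathematica idiom is to memoize, so that the function remembers every value it has already computed:

    L[n_] := L[n] = L[n - 2] + L[n - 1]   (* stores each L[n] the first time it is computed *)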

There is a sequence I created. Maybe it’s already famous; I have no idea. On the other hand, maybe no one cares. But I wanted to show that with any made-up sequence that is recursive in the same way as the Fibonacci and Lucas numbers, the ratios of neighbouring numbers get closer to the Golden Ratio, \phi = \frac{1 + \sqrt{5}}{2}, as the numbers grow. The point is that this is not really anything special to attribute to Fibonacci or to Lucas: for any recursive sequence in which the next term is the sum of the previous two, sooner or later you approach the Golden Ratio in the same way. It doesn’t matter what your starting numbers are, and, as in Lucas’s sequence, they don’t even have to begin in order. So let’s say I have:

K[0] := 2
K[1] := 5

K[n_] := K[n-2] + K[n-1]

So, just for kicks, I’ll show the first 31 terms:

Table[K[n], {n, 0, 30}]
{2, 5, 7, 12, 19, 31, 50, 81, 131, 212, 343, 555, 898, 1453, 2351, \
3804, 6155, 9959, 16114, 26073, 42187, 68260, 110447, 178707, 289154, \
467861, 757015, 1224876, 1981891, 3206767, 5188658}

Now, let’s output the Golden Ratio to 15 decimals as a reference:

N[GoldenRatio, 15]
1.61803398874989

Now, let’s take the ratio of the last two numbers in my 31-member sequence:

N[K[30]/K[29], 15]
1.61803398874942

You may say that the last two digits are off, but trying the same thing with the Fibonacci sequence, the ratio of the 31st to the 30th number yields only 1.61803398874820, off by 3 digits.

For Lucas: 1.61803398875159, off by 4 digits — even worse.

So, my made-up sequence lands closer to \phi than either Lucas or Fibonacci. I have tried other made-up sequences; some are more accurate and some are less. It seems to depend on the starting numbers: some combinations work better than others, and you won’t necessarily get greater accuracy by starting with larger numbers.
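To see why any such sequence homes in on \phi (a quick derivation of my own, not in the original post): if t_{n+1} = t_n + t_{n-1} and r_n = \frac{t_{n+1}}{t_n}, then r_n = 1 + \frac{1}{r_{n-1}}. Any limit r must therefore satisfy r = 1 + \frac{1}{r}, that is, r^2 = r + 1, whose positive root is \frac{1 + \sqrt{5}}{2} = \phi, regardless of the (positive) starting values. A short Mathematica sketch for experimenting with other starting pairs (the function name golden is mine):

    golden[a_, b_, n_] := Module[{pair},
      pair = Nest[{#[[2]], #[[1]] + #[[2]]} &, {a, b}, n];  (* apply the recurrence n times *)
      N[pair[[2]]/pair[[1]], 15]]                           (* ratio of the last two terms  *)

    golden[2, 5, 29]   (* the K sequence above, i.e. K[30]/K[29] *)
    golden[1, 1, 29]   (* Fibonacci *)
    golden[2, 1, 29]   (* Lucas *)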

More junk science: the 90-day AccuWeather forecast

Weather forecasting is a black art at the best of times. You can look at today’s weather pattern and, using probability models based on such patterns, forecast tomorrow’s weather. We all know that this only works some of the time over a 5-day period. But now there are people who want to sell services for pinpointing weather conditions on a daily basis over a 90-day period.

A private company known as AccuWeather is offering the waiting public a 90-day weather forecast, selling to consumers the feeling of control over the distant future. There is not much anyone can say about weather in the long term except that “winter is cold” and “summer is hot”. The rest is all the stuff of farmer’s almanacs and crystal ball gazers. AccuWeather is not telling anyone (at least not yet) how they are able to forecast specific weather conditions on specific days over a 90-day period. But this is what they are doing. So apparently, you can know how to pack your suitcase for that trip to New York 60 days from now, since you will know that on that day there will be 1.5 inches of rain.

This is not that new. AccuWeather already has had a 45-day forecast, and so, according to their press release from April 11, 2016, they are providing a 90-day forecast, driven to “greater challenges” such as this by consumer demand. They claim to be able to forecast on this scale with Superior Accuracy™. (Yes, that phrase is trademarked by AccuWeather).

This forecasting is not endorsed by any existing government weather service, college meteorology department, or university professor who knows anything about forecasting the weather. But I am sure there are enough gullible people who want that feeling of control, and who find the truth (that weather is chaotic, and too subject to “the butterfly effect” to be knowable on this scale) inconvenient. AccuWeather probably knows that no one will take this seriously except a small group that just wants that feeling of predictability and control in their lives. And that is what AccuWeather is really selling: your feelings.

The HP 35s Calculator: a revised review

[Image: hp35s]
The HP 35s Programmable Calculator

A while ago, I wrote an article on a different blog regarding the HP 35s programmable calculator. Depending on where you buy it, it can cost anywhere from $55 to $98.

I have heard in other places about the plastic used to make this calculator. It is indeed cheap plastic. It certainly feels hollow when you hold it. It belies the amount of memory and the increased calculating power that lies inside. The calculator has two calculation modes: ALG mode (algebraic mode) to resemble conventional calculators, and RPN mode (reverse-Polish notation), which, for those who do long calculations, provides a way to avoid parentheses, but requires getting used to stacks.

As far as RPN mode goes, at most four numbers can be pushed onto the stack on this calculator. I have read reviews of other HP calculators where the stack can be much larger. The numbers push to the bottom of the stack as you enter new ones, and as you enter them, the “bottom” of the stack actually moves “up” on the display, which makes the data structure a little confusing to describe, since the numbers scroll in the opposite direction from what you might expect. In theory, you “push” data onto the top of a stack and you “pop” data off the top; this is a LIFO (“last in, first out”) data structure. In the HP 35s implementation, you effectively “push” data onto the bottom of the stack, the numbers “above” it move upward, and you “pop” data off the bottom. It amounts to the same thing in the end: it is still a LIFO data structure. Pushing a fifth number onto the stack causes the first number to disappear, so you can only work with four numbers at a time.

So, let’s say that you have the following stack:

a: 8
b: 7
c: 6
d: 5

The last two numbers entered are the numbers “6” and “5” in memory locations “c” and “d” respectively. Operations will be done on the last two numbers entered. So, if I now press the operator “+”, it will add 6 to 5 and pop both of these numbers off of the stack.

a: 0 (empty)
b: 8
c: 7
d: 11

The stack rotates down: location “a” becomes empty, and the 11, the result of the calculation, replaces both the 6 and the 5.

Some operators are unary, so pressing the square root will perform an operation on only the last number (location “d”), and replace the result back into location “d”, eliminating the number 11.

a: 0 (empty)
b: 8
c: 7
d: 3.31662479036
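Since the code examples on this blog are in Mathematica, here is a minimal sketch of the stack behaviour just described, following the a/b/c/d layout of the example above (the function names and the “0 = empty” convention are mine, for illustration only; this is not the calculator’s actual firmware):

    push[{a_, b_, c_, d_}, x_] := {b, c, d, x}              (* oldest entry falls off the stack  *)
    binaryOp[{a_, b_, c_, d_}, f_] := {0, a, b, f[c, d]}    (* consume c and d, leave the result *)
    unaryOp[{a_, b_, c_, d_}, f_] := {a, b, c, f[d]}        (* act on the last entry only        *)

    stack = {8, 7, 6, 5};
    stack = binaryOp[stack, Plus]           (* -> {0, 8, 7, 11} *)
    stack = unaryOp[stack, N[Sqrt[#]] &]    (* -> {0, 8, 7, 3.31662...} *)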

Then there is the programmability of the calculator. There are many commands available, and one pet peeve of mine is that you are only allowed a single letter as the name of a program.

I don’t often talk about politics, …

If government was always seen as the shadow cast by big business, I guess someone must have felt it was time for big business to be directly at the controls of power.

I feel the need to comment on the Trump rout of the Republican party. I can see that he has resonated with a lot of normally disenfranchised voters: the poor, people of color, religious people, and so on. He was thought of as a clown doing what Sarah Palin used to do, which was pretty much the same thing. Problem is that Trump does it much better than Palin ever did. And by doing so, he lays a blow to the solar plexus of the Republican Party.

But what really intrigues me is that the other candidates are so shocked, simply shocked (to borrow a line from Casablanca), to find that voters are gravitating to a candidate who embodies all of the most misogynist, ignorant, and racist elements of the country. Indeed, Trump’s campaign is so full of self-contradictions that, on a close look, it is difficult to know what he stands for, except that it is against whatever traditional Republicans stand for. Even “Donald Drumpf”, his original family name according to HBO’s Last Week Tonight, gets more Google searches than any of the competing Republican candidates.

American society has been ripe for just the kind of candidate Trump is. It’s just that no one wants to admit it. I read in The New York Times that Republicans were critical of Trump supporters for not caring about the “traditions” of the Republican party. But in the way that politics has adopted business terminology, the voters, like consumers, are attracted to the Trump “brand”. If we are at the level of “branding” people’s names like soap or cars, then voters hardly need to remember the traditions or history of anything. Like their favourite toothpaste, they have a favourite candidate, with no need to go deeper than that. Trump understands this viscerally. He is a billionaire businessman, after all. He just needs to sell himself. The art of persuasion wins over functional literacy and discussion of the facts. Propagandists have always known this. Selling himself is something Trump is indeed good at; critics may pick away at the fact that this or that Trump business venture failed, but that misses the point. Trump knows about consumers, about mindless consumption, and about tapping into that mindset.

I predict that Trump is the new face of the Republican Party. And it is no surprise, since he is at the end of a continuum beginning with Ronald Reagan and proceeding through George W. Bush, all of whom preyed on ignorance. From here on in, I predict that the Republicans will need to become more like Trump, not the other way around. It is also difficult to say whether Trump will respect the traditions of democracy, or whether he will just unilaterally fire or imprison anyone who gets in his way.

What gets me is that none of the other candidates in the Republican party are “taking one for the team” and joining another candidate to defeat Trump if they are so shocked by Trump’s victories. It is as if the vote against Trump is being split on purpose as a coronation of Trump. Naw, can’t be. There is too much press, in nearly all news sources, going against that theory. What would anyone high up in the Republican establishment want with a billionaire businessman possibly running the Oval Office, anyway?

Latex editors: a comparison

[Image: latex_lamport (Leslie Lamport’s Latex book)]
If you are using Lyx or Texmacs, this book is still an absolute must. It is THE bible for this language.

Latex is a math typesetting markup language which has been around for about 30 years. It is older than HTML, and runs on pretty much any kind of computer that can support a Latex compiler. I have written many term papers in it, and continue to use it to write documents. Its best feature is its ability to handle mathematical and scientific notation, and it is the official typesetting language of the American Mathematical Society. A Stanford professor, Dr. Donald Knuth, invented a lower-level markup language called Tex in the late 1970s, and Latex, designed in 1985 by Leslie Lamport, was and is just a bunch of Tex macros, sophisticated enough to amount to a higher-level language. You can edit complete documents and even entire books with only a background in Latex. Latex is robust enough that it is all I ever use for math and science documents.

Latex documents are known for their distinctive Roman font and their clean presentation of even the most complicated formulae. The WordPress editor used in making this blog article can show formulae using the distinctive Latex fonts: m = \frac{m_0}{\sqrt{1-\frac{v^2}{c^2}}} is the way Einstein’s relativistic mass formula is presented in my editor. This is identical to how it would appear in a Latex paper document. Unfortunately, this editor only displays inline math, so I can’t show you how it would display “presentation-style” (display) math, where the fonts would be larger.
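For anyone who has not seen the markup itself, this is roughly what the source for that formula looks like in a minimal standalone Latex document, with the formula set in display style (my own sketch, not taken from the blog):

    \documentclass{article}
    \begin{document}
    Einstein's relativistic mass formula, in display style:
    \[
      m = \frac{m_0}{\sqrt{1 - \frac{v^2}{c^2}}}
    \]
    \end{document}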

Over the decades, there have been editors in existence that mimic Latex in presentation of fonts and formulae. Two that I have encountered are Lyx and Texmacs.

Both Lyx and Texmacs try to distance themselves from being just WYSIWYG wrappers for the Tex/Latex language. While the fonts displayed by default in these editors are the distinctive Metafont fonts known from Latex, saving a file saves it in each editor’s own native format. If you want Latex, you have to export your work into Latex format.

[Image: texmacs]
Examples of output from Texmacs. Not bad for math (especially if you know the Latex math codes), but not so easy to use when not in math mode.

First I’ll discuss Texmacs, since my experience with it is the most recent. I discovered Texmacs by surprise when browsing through my Cygwin menus on my laptop. While one would think that, going by the name, Texmacs must be some combination of Tex and Emacs, it depends on neither. The editor has no resemblance to Emacs (neither in the user interface nor in the keystrokes), and the document options that appear on the toolbar and in the menus are in line with Latex document and font options. Texmacs produces its own Texmacs code by default, and while Latex can be exported, the exported document may not end up looking the same. I have found that many font changes were lost, for instance.

As someone who has worked with Latex for close to 30 years, I can say that nearly all of the resemblance to Latex, as well as the ease of use, lies in the editor’s math commands, although there is more dependence on the GUI. You can’t type “\frac{3}{4}” to get \frac{3}{4}, but there is a Texmacs icon you can click that handles it. Its weakness lies in its handling of the rest of the document. Tables are not well implemented: it appears incapable of inserting gridlines to form the borders of table cells, for instance, even though the command for it appears to be there in the GUI. I found I needed to export the Latex code, bail out of Texmacs, and edit the Latex directly in a text editor. Another drawback of Texmacs is that while the icons cover nearly anything you would like to do in math, your choices of math expressions are largely limited to the buttons provided. If you are going to do something more complicated, you will find reason to edit the Latex code by hand again in a text editor. And once you do, importing the *.tex file back into Texmacs to continue editing will not guarantee that your new Latex code is understood the way you want it.

One thing that Texmacs does rather well is change fonts. Latex/Tex has ways of changing fonts internal to its language, but you are limited to only a small number of standard Tex fonts unless you know your way around the preamble, or header part, of the code. Texmacs leaves you more open to alternative installed fonts, allowing you to take advantage of the diversity of Tex fonts, of which there are hundreds created over the last decade or more. In fact, Texmacs is the only way I know of to take advantage of fonts beyond the Roman/Helvetica/monospace fonts at the core of Tex in a way that is even remotely as easy as a word processor. Texmacs documents will have a Latex look and feel, with greater flexibility in font choices, but as said earlier, all of this is great only as long as you stick largely to simple math, or math in the toolbars, and avoid typesetting constructs outside the math markup, such as tables.

[Image: lyx]
An older version of the Lyx editor.

Lyx is, I believe, a much older editor. It claims to use Latex for its typesetting, but my experience with it (admittedly from years ago) was that for serious math applications, you have to export the Latex code and edit it by hand to get what you want. Make sure you have Leslie Lamport’s Latex book beside you at the time. After decades of working on and off with Latex, I can never completely keep the language and all its nuances in my head, and I need the constant assistance of Lamport’s book at my side. This also ends up being the case for Texmacs, since even basic formatting has to be changed by hand under that editor.

These editors can save a lot of time in getting the basic look and feel of your document down, but in the end you need to, at some point, hunker down and edit the Latex code directly in a text editor. I use vi, from which I constantly need to bail out, compile the code, and run xdvi on the compiled *.dvi file to see what it looks like and what Latex code I need to tweak next.

Both Texmacs and Lyx are released under the GPL.
Texmacs source code: http://savannah.gnu.org/svn/?group=texmacs
Lyx source code: http://www.lyx.org/Download

Brewing coffee and the environment

Coffee drinking is, I suppose, the curse of the educated class. But one should not forget coffee lovers who just like the taste, and I guess that could describe all kinds of people. I love the flavour, and the caffeine “buzz”, when it happens, is a distant second in what I consider to be a good coffee experience. That being said, I am somewhat fussy in my choices of coffee.

[Image: gold-filter]
A “gold” filter, not anywhere near as expensive as gold.

There has been a recent trend toward single-serve coffee makers. I brew for myself, so that is the kind of coffee maker I would likely buy. For the longest while, I used a “gold” (possibly brass) filter basket, which was fine, but the coffee lacked smoothness. I have also tried paper single-serve filters, such as those made by Melitta and others. However, I didn’t like the environmental implications of disposable filters (the reason I went to gold filters in the first place).

K-Cups (and all pods like them) are a bad idea on two fronts. K-Cups themselves are not recyclable, and enough of them have been sold by Keurig (the inventors of the system) alone by the start of this year to stretch 401,000 kilometers when placed end to end; that is far enough to reach the moon, with room to spare. This is a serious waste-disposal problem for all of us. It is changing slowly, but newer versions of K-Cups depend on the recycler (meaning you) disassembling the cups before disposal, and it is unlikely that enough people will want to do that. At any rate, the aggregate amount of money spent, pound for pound, is more than double that of high-quality ground coffee sold in pound bags: nearly $50.00 per pound. This is true regardless of the make of the coffee pod.

The attraction of coffee pods for coffee aficionados, I would suppose, is that the amount of cleaning is greatly reduced compared with an espresso machine. In addition, Keurig machines and their ilk are much less expensive than espresso machines. They produce a better coffee than ordinary perk, since each airtight packet is sealed using what is called a “modified atmosphere” to keep the grounds fresh. A modified atmosphere usually means nitrogen infused into the packet to reduce oxidative degradation in a harmless way (our atmosphere is about 78% nitrogen, so you breathe in huge gulps of it all the time).

I compare them with espresso machines, since they too make one cup of coffee at a time, but usually cost hundreds of dollars and take up more counter space than single brew coffee makers like Keurig.

[Image: hb_flexbrew]
The Flex Brew, taking up less space than a typical coffee maker or espresso machine.

Some manufacturers now offer a K-cup-compatible adaptor for ordinary ground coffee, usually as an optional add-on. Hamilton Beach has it as a basic feature of their “Flex-Brew” line, which also offers the option of doing away with K-cups altogether.

Single-serve coffee makers such as the Hamilton Beach Flex Brew feature a way to avoid K-cups and brew your own grind. They come with a gold-ish metal filter much smaller than the one in the illustration above, and it provides a saner way to reliably measure how much ground coffee to add than a Melitta filter does. I have tried it, and I am happy with the coffee flavour, as well as with the idea that I can use any ground coffee I like: I am not limited to whatever K-cups have on offer, nor do I need to pay their exorbitant cost.

Programmatic Mathematica XIV: Generating a Hilbert Matrix

I briefly covered Gaussian elimination in Mathematica for small matrices. You can easily google Gaussian elimination or consult an introductory linear algebra textbook if you would like to know more. Raising the stakes in this discussion, we will see how we can shoe-horn Mathematica into handling “ill-conditioned” matrices such as the Hilbert matrix. A Hilbert matrix has cells given by H(i,j) = \frac{1}{i+j-1}. Its determinant can be had by defining a function f(n) = 1!\cdot 2!\cdot 3!\cdot\ldots\cdot(n-1)!; the determinant of the n \times n Hilbert matrix is then \det(H_n) = \frac{(f(n))^4}{f(2n)}. It may not look like it, but the denominator increases much faster than the numerator, causing the fraction to approach zero when n is large.

Such a matrix can be made in Mathematica:

In[12]:= HilbertMatrix[15]

Out[12]= {{1, 1/2, 1/3, 1/4, 1/5, 1/6, 1/7, 1/8, 1/9, 1/10, 1/11, 1/12, 1/13, 1/14, 1/15},
{1/2, 1/3, 1/4, 1/5, 1/6, 1/7, 1/8, 1/9, 1/10, 1/11, 1/12, 1/13, 1/14, 1/15, 1/16},
{1/3, 1/4, 1/5, 1/6, 1/7, 1/8, 1/9, 1/10, 1/11, 1/12, 1/13, 1/14, 1/15, 1/16, 1/17},
{1/4, 1/5, 1/6, 1/7, 1/8, 1/9, 1/10, 1/11, 1/12, 1/13, 1/14, 1/15, 1/16, 1/17, 1/18},
{1/5, 1/6, 1/7, 1/8, 1/9, 1/10, 1/11, 1/12, 1/13, 1/14, 1/15, 1/16, 1/17, 1/18, 1/19},
{1/6, 1/7, 1/8, 1/9, 1/10, 1/11, 1/12, 1/13, 1/14, 1/15, 1/16, 1/17, 1/18, 1/19, 1/20},
{1/7, 1/8, 1/9, 1/10, 1/11, 1/12, 1/13, 1/14, 1/15, 1/16, 1/17, 1/18, 1/19, 1/20, 1/21},
{1/8, 1/9, 1/10, 1/11, 1/12, 1/13, 1/14, 1/15, 1/16, 1/17, 1/18, 1/19, 1/20, 1/21, 1/22},
{1/9, 1/10, 1/11, 1/12, 1/13, 1/14, 1/15, 1/16, 1/17, 1/18, 1/19, 1/20, 1/21, 1/22, 1/23},
{1/10, 1/11, 1/12, 1/13, 1/14, 1/15, 1/16, 1/17, 1/18, 1/19, 1/20, 1/21, 1/22, 1/23, 1/24},
{1/11, 1/12, 1/13, 1/14, 1/15, 1/16, 1/17, 1/18, 1/19, 1/20, 1/21, 1/22, 1/23, 1/24, 1/25},
{1/12, 1/13, 1/14, 1/15, 1/16, 1/17, 1/18, 1/19, 1/20, 1/21, 1/22, 1/23, 1/24, 1/25, 1/26},
{1/13, 1/14, 1/15, 1/16, 1/17, 1/18, 1/19, 1/20, 1/21, 1/22, 1/23, 1/24, 1/25, 1/26, 1/27},
{1/14, 1/15, 1/16, 1/17, 1/18, 1/19, 1/20, 1/21, 1/22, 1/23, 1/24, 1/25, 1/26, 1/27, 1/28},
{1/15, 1/16, 1/17, 1/18, 1/19, 1/20, 1/21, 1/22, 1/23, 1/24, 1/25, 1/26, 1/27, 1/28, 1/29}}

This square matrix is classified as a Hilbert matrix, famous for being non-trivial to solve, even by computer. I fixed the output using the Rationalize[] function to show that what the Table[] function actually generated was a matrix of rational numbers. In addition, the output looks less like a matrix and more like a two-dimensional array. This tells you something about the implementation of matrices in Mathematica: it sees your matrix as “a kind of table”, or more accurately, “a kind of array”. The same array could have been generated by repurposing the Table[] command:

    m = Rationalize[N[Table[1/(i + j - 1), {i, 15}, {j, 15}]]]

This kind of formula also reminds us that the cells H_{i,j} in the Hilbert matrix are each of the form 1/(i + j - 1), giving it the familiar pattern of fractions we see.

Mathematica can present this as an actual matrix if pressed, using the MatrixForm[] command. Mathematica 10 even provides a direct HilbertMatrix[] command:

[Image: hilbert_square_matrix]

This command was invoked to produce the same Hilbert matrix that we started with earlier in this example. Mathematica 10 appears to choke on finding the determinant of this matrix (the Det[] command), in that it just echoes back your Det[] command with the expanded matrix. When that output is re-run, Mathematica returns the determinant for this 15-by-15 matrix:

1/9446949653634668571373109351236989087975627994978804269595338137635022705891424600259116300098090513203200000000000000000000,

the denominator being a 123-digit number. As stated earlier, the determinant quickly goes to zero, and a square dimension of 15 is hardly a large matrix, so you need not go far to see this demonstrated. Even the 5×5 Hilbert matrix has a determinant of 1/266,716,800,000, on the order of one part in a few hundred billion. At 3×3, the determinant is still only 1/2160.
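As a quick sanity check of the f(n) formula given at the top of this section (a sketch of mine, not from the original article):

    f[n_] := Product[k!, {k, 1, n - 1}]      (* f(n) = 1!*2!*...*(n-1)! *)
    detH[n_] := f[n]^4/f[2 n]                (* determinant of the n-by-n Hilbert matrix *)

    detH[3]                                  (* 1/2160 *)
    detH[5]                                  (* 1/266716800000 *)
    detH[3] == Det[HilbertMatrix[3]]         (* True *)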