## Profile

Jay D'Lugin, MD, MS

266 followers | 770,673 views

## Stream

### Jay D'Lugin, MD, MS

Shared publicly - Biometrics are not secret and won't even stay obscure for long.

Jan Krissler used high-resolution photos, including one from a government press office, to successfully recreate the fingerprints of Germany's defence minister.

### Jay D'Lugin, MD, MS

Shared publicly - Awesome.

Hey, do you use ? No? Well, what do you use? Weird, no one else I know uses that. Here, just create an account real quick. Please? Yeah, we're all tired of... by Bertel King, Jr. in Applications, News

### Jay D'Lugin, MD, MS

Shared publicly - **A Last Gift from Ramanujan**

Srinivasa Ramanujan is a legend of the mathematics world. The son of a shop clerk in rural India, he taught himself mathematics, primarily out of a book he borrowed from the library. The math that he did started out as rediscovering old results, and then became novel, and ultimately became revolutionary; he is considered to be one of the great minds of mathematical history, someone routinely mentioned in comparison with names like Gauss, Euler, or Einstein.

Ramanujan's work became known beyond his village starting in 1913, when he sent a letter to the British mathematician G. H. Hardy. Ramanujan had been spamming mathematicians with his ideas for a few years, but his early writing in particular tended to be rather impenetrable, of the sort that today I would describe as "proof by proctological extraction:" he would present a result which was definitely true, and you could check that it was true, but it was completely incomprehensible how he got it. But by the time he wrote to Hardy, both his clarity and the strength of his results had improved, and Hardy was simply stunned by what he saw. He immediately invited Ramanujan to come visit him in Cambridge, and the two became lifelong friends.

Alas, his life was very short: Ramanujan died at age 32 of tuberculosis (or possibly of a liver parasite; recent research suggests this may have been his underlying condition), less than six years after his letter to Hardy.

When we talk about people whose early death was a tremendous loss to humanity, there are few people for whom it's as true as Ramanujan, and a recent discovery in his papers has just underlined why.

This discovery ties together two stories separated by centuries: The "1729" story, and the great mystery of Pierre Fermat's last theorem.

The 1729 story comes from a time when Hardy visited Ramanujan while he was ill. In Hardy's words:

"I remember once going to see him when he was ill at Putney. I had ridden in taxi cab number 1729 and remarked that the number seemed to me rather a dull one, and that I hoped it was not an unfavorable omen. 'No', he replied, 'it is a very interesting number; it is the smallest number expressible as the sum of two cubes in two different ways.'"

This has become the famous Ramanujan story (and in fact, 1729 is known to this day as the Hardy-Ramanujan Number), because it's just so ludicrously Ramanujan: he *did* have the reputation of being the sort of guy to whom you could mention an arbitrary four-digit number, and he would just happen to know (or maybe figure out on the spot) some profound fact about it, because he was *just that much of a badass.*
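Ramanujan's claim about 1729 is easy to verify by brute force. A minimal Python sketch (the function name and the search bound of 20 are my own arbitrary choices; the bound just happens to be large enough):

```python
from collections import defaultdict

def smallest_taxicab(limit=20):
    # Sum a³ + b³ over all pairs 1 ≤ a ≤ b ≤ limit, then pick the
    # smallest total that arises from two different pairs.
    sums = defaultdict(list)
    for a in range(1, limit + 1):
        for b in range(a, limit + 1):
            sums[a**3 + b**3].append((a, b))
    return min(n for n, pairs in sums.items() if len(pairs) >= 2)

print(smallest_taxicab())  # 1729 = 1³ + 12³ = 9³ + 10³
```

The next such number, 4104 = 2³ + 16³ = 9³ + 15³, also falls inside this bound, which is why the `min` is needed.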

The other story is that of Fermat's Last Theorem. Pierre de Fermat was a 17th-century French mathematician, most famous for a theorem he *didn't* prove. In 1637, he jotted down a note in the margins of a book he had, about a generalization of the Pythagorean Theorem.

From Pythagoras, we know that the legs and hypotenuse of a right triangle are related by a²+b²=c². We also know that there are plenty of sets of integers that satisfy this relationship -- say, 3, 4, and 5. Fermat asked if this was true for higher powers as well: that is, when n>2, are there any integers a, b, and c such that aⁿ+bⁿ=cⁿ? He claimed that the answer was no, and that "he had a truly marvelous proof of this statement which was, unfortunately, too large to fit in this margin."

The consensus of mathematicians ever since is that Pierre de Fermat was full of shit: he had no such proof, and was bluffing.

In fact, this statement -- known as Fermat's Last Theorem, as his notes were only discovered after his death -- wasn't proven until 1995, when Andrew Wiles finally cracked it. Wiles' success was stunning because he didn't use any of the traditional approaches: instead, he took (and significantly extended) a completely unrelated-seeming branch of mathematics, the theory of elliptic curves, and figured out how to solve this. That theory is also at the heart of much of modern cryptography, not to mention several rather unusual bits of physics (including my own former field, string theory).

And so these two stories bring us to what just happened. A few months ago, two historians digging through Ramanujan's papers were amused to find the number 1729 on a sheet of paper: not written out as such, but hidden in the very formula which expresses that special property of the number, 9³+10³=12³+1.

What turned this from a curiosity into a holy-crap moment was when the rest of the page, and the pages that went with it, suddenly made it clear that Ramanujan hadn't come up with 1729 at random: that property was a side effect of him making an attempt at Fermat's Last Theorem.

What Ramanujan was doing was looking at "almost-solutions" of Fermat's equation: equations of the form aⁿ+bⁿ=cⁿ±1. He had developed an entire mechanism of generating triples like these, and was clearly trying to use this to home in on a way to solve the theorem itself. In fact, the method he was using was precisely the method of elliptic curves which Wiles ended up using to successfully crack the theorem most of a century later.
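Ramanujan's actual generating mechanism isn't reproduced here, but the triples themselves are easy to hunt for naively. A brute-force Python sketch for the n = 3 case (`near_misses` and its bound are my own illustration, not Ramanujan's method):

```python
def near_misses(limit):
    # Find triples (a, b, c) with a³ + b³ = c³ ± 1, i.e. "almost-solutions"
    # of Fermat's equation for n = 3.
    cubes = {c**3: c for c in range(1, limit + 1)}
    hits = []
    for a in range(1, limit + 1):
        for b in range(a, limit + 1):
            s = a**3 + b**3
            for e in (1, -1):            # look for a³ + b³ = c³ + e
                c = cubes.get(s - e)
                if c and c != b:         # skip the trivial 1³ + b³ = b³ + 1
                    hits.append((a, b, c, e))
    return hits

for a, b, c, e in near_misses(50):
    print(f"{a}³ + {b}³ = {c}³ {'+' if e > 0 else '-'} 1")
```

This turns up 6³ + 8³ = 9³ − 1 as well as the taxicab identity 9³ + 10³ = 12³ + 1 itself.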

What makes this completely insane is this: Wiles was taking a previously-separate branch of mathematics and applying it to a new problem.

But *the theory of elliptic curves wasn't even invented until the 1940's.*

Ramanujan was making significant progress towards solving Fermat's Last Theorem, using the mathematical theory which would in fact prove to be the key to solving it, while making up that entire branch of mathematics sort of in passing.

*This* is why Ramanujan was considered one of the greatest badasses in the history of mathematics. He didn't know about 1729 because his head was full of random facts; he knew about it because, oh yes, he was in the middle of doing yet another thing that might restructure math, but it didn't really solve the big problem he was aiming at so he just forgot about it in his stack of papers.

I shudder to imagine what our world would be like if Ramanujan had lived a longer life. He alone would probably have pushed much of mathematics ahead by 30 or 40 years.

If you want to know more about elliptic curves, Fermat, and how they're related, the linked article tells more, and links to more still. You can also read an outline of Ramanujan's life at https://en.wikipedia.org/wiki/Srinivasa_Ramanujan, and about Fermat's Last Theorem (and why it's so important) at https://en.wikipedia.org/wiki/Fermat%27s_Last_Theorem.

Ramanujan's manuscript. The representations of 1729 as the sum of two cubes appear in the bottom right corner. The equation expressing the near counter-examples to Fermat's Last Theorem appears further up: α³ + β³ = γ³ + (−1)ⁿ. Image courtesy Trinity College library.

### Jay D'Lugin, MD, MS

Shared publicly - A few years ago, +Randall Munroe of XKCD did a survey of what people called various colors, and open-sourced the resulting dataset. The diagram below is a very interesting visualization of it. The X-axis represents hue, scanning over (the RGB-representable part of) the color spectrum. The Y-axis shows which names were most common for that particular color.

(Note that different saturations and values are simply stacked up vertically, which is why orange and brown are on top of one another)

What's interesting is that some colors seem to have much more agreed-upon names than others. Green, blue, and purple seem to create the most consensus; red, brown, orange, and yellow significantly less so. I wouldn't be surprised if this were tied to the way our eyes work: our green and blue cones (the color sensors in our retinas) are very color-specific, while the red cone is very broad, and so the span of "red" things could get a messier name. There's also the fact that red blends into pink and thence into purple, colors which our eyes actually detect fairly indirectly: while teal really is between the frequencies of blue and green, purple isn't between red and blue at all, and our eyes process it by some creative cheating at the data-processing stage.

Another phenomenon you can see on this graph is the presence of certain colors which appear to be well-defined "things:" the spikes in the graph at teal, yellow, green, and so on suggest that there's a color there that we agree is distinct from other colors. There's an interesting debate about the extent to which the set of these peaks is cultural versus biological, and the answer may well be different for different peaks. For example, red and green seem to be defined pretty much across the board, but teal (and its variants) don't appear in all languages. There does seem to be some kind of well-defined hierarchy, in that everyone has words for the most basic colors, and more refined color names are added in tiers. That's true not just across cultures but within them: people whose work involves detailed color matching have much richer vocabularies for this than people who don't, for obvious reasons.

Via +Grizwald Grim
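As a toy sketch of the first step behind a visualization like this -- bucket the survey answers by hue, then take the most common name per bucket -- here's a minimal Python version. The `rows` data and the bin width are made-up stand-ins, not the actual survey dataset:

```python
from collections import Counter, defaultdict

# Hypothetical (hue, name) answers standing in for the real survey rows.
rows = [(5, "red"), (25, "orange"), (27, "orange"),
        (120, "green"), (122, "green"), (125, "green"),
        (200, "blue"), (205, "teal"), (210, "blue")]

def modal_names(rows, bin_width=60):
    # Bucket each answer into a hue bin, then report the most common
    # name in each bin -- the "consensus" name for that slice of hue.
    bins = defaultdict(Counter)
    for hue, name in rows:
        bins[hue // bin_width * bin_width][name] += 1
    return {b: c.most_common(1)[0][0] for b, c in sorted(bins.items())}

print(modal_names(rows))  # {0: 'orange', 120: 'green', 180: 'blue'}
```

The real chart goes further, stacking different saturations and values vertically within each hue, but the binning idea is the same.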

I'm an incorrigible data hound. So, once the tempting aroma of XKCD's color name survey results tickled my nose, I had no choice – but to run to the dining room, stand up on my hind legs, and yank that statistical top sirloin off the table. Om nom nom yum yum yum!

### Jay D'Lugin, MD, MS

Shared publicly - Got a new calculator today. So now the question is: Which will be the daily-use calculator, this or my HP48SX? :)

For those who have never seen one, this is a Felix arithmometer, the calculator which was at the heart of Soviet science and engineering for decades. They were essentially clones of the Swedish Odhner arithmometer: after the revolution, the Odhners left Russia rather quickly, but in 1924, Felix Dzerzhinsky – head of the Cheka, the forerunner of the NKVD and KGB – decided that Russia needed calculators, and so took to running a calculator factory in his copious free time. This device, in a few variations, remained the reliable and omnipresent mainstay of the Eastern Bloc until the 1970's.

And I recall Edward Teller explaining to me how, in his youth, we did not have all of these megaflops and gigaflops; we had a one-flop computer. "Flops" is short for "floating-point operations per second," a measure of the speed of a computer – and this one is indeed about one flop. You use the row of ten small levers on the top to key in your input; you turn the handle away from you to add, towards you to subtract. Output shows up on the counter below, with the count of operations in the separate register on the left. By sliding the bottom mechanism left and right, you can shift the decimal point, and thus easily multiply or divide; various tricks allow one to take square roots, trig functions, and so on. (I have found an excellent book on the subject, but unfortunately it exceeds my rather limited Russian.)
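The shift-and-add multiplication procedure can be sketched in a few lines of Python; `felix_multiply` is my own illustrative name, and each inner loop iteration stands in for one turn of the handle:

```python
def felix_multiply(x, multiplier):
    # Multiply the way an Odhner-style machine does: for each digit of
    # the multiplier, slide the carriage to that decimal place, then
    # turn the handle (add) that many times.
    result, turns = 0, 0
    for place, digit in enumerate(reversed(str(multiplier))):
        shifted = x * 10**place          # carriage slid 'place' positions
        for _ in range(int(digit)):      # one handle turn per count
            result += shifted
            turns += 1
    return result, turns

print(felix_multiply(127, 46))  # (5842, 10): 6 turns, shift, then 4 turns
```

Ten handle turns instead of 46 is the whole point of the sliding carriage.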

It makes a lovely clanking noise as it operates, and a "bing!" from the bell whenever you cross zero as you add or subtract.

This particular device was made in the Kursk factory, the second one to make Felixes; I'm guessing that it's from the 1940's or 1950's, but absent a list of serial numbers or production schedules, that's mostly a guess. Rather amazingly, it's in perfect working order; a quick cleaning and some oil and it's like it just came off the line.

I feel like I need to compute a spacecraft trajectory or model the hydrodynamics of a nuclear explosion or something.

COOL.

### Jay D'Lugin, MD, MS

Shared publicly - An entry door of a commercial building.

Welded finishing nails.

### Jay D'Lugin, MD, MS

Shared publicly - Drivers? Where we're going, we don't need drivers!

## Story

Tagline: Physician, medical informaticist, UI/UX designer, beagle lover

## Links

YouTube