1. ARC assembly: Kathleen Booth, 1950
2. Address: Kateryna Yushchenko, 1955
3. COBOL: Grace Hopper, 1959
4. FORMAC: Jean Sammet, 1962
5. Logo: Cynthia Solomon, 1967
6. CLU: Barbara Liskov, 1974
7. Smalltalk: Adele Goldberg, 1980
8. BBC BASIC: Sophie Wilson, 1981
9. Coq: Christine Paulin-Mohring, 1991
Matlab is a kind of de facto standard in the scientific world for numerical computing. From a language-theoretical perspective it is (IMO) a very poorly thought-out language, with frequently poor performance and a high degree of non-orthogonality. But as long as you stick with vectorised operations, Matlab can perform tolerably well. And not every aspect of the language is poorly designed: it has a very compact notation for dealing with arrays and their slices, and it's easy to call C code when needed.
Julia is an attempt to fix many of the problems with Matlab. This means it slavishly copies many of Matlab's features, but it uses LLVM on the back end to generate high-performance code. What's appealing is that you can mix imperative-style loops with array updates alongside Matlab-style vector operations and know you'll likely get good performance either way. This makes it very appealing to me.
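Here's a minimal sketch of that mixture, using a toy axpy-style update of my own invention (the function names are illustrative, not standard library):

```julia
# Two ways to compute y += a*x elementwise; both compile to fast native code.

function axpy_loop!(y, a, x)
    @inbounds for i in eachindex(y, x)   # plain imperative loop with array updates
        y[i] += a * x[i]
    end
    return y
end

axpy_vec!(y, a, x) = (y .+= a .* x)      # Matlab-style vectorised form

x = [1.0, 2.0, 3.0]
y = [10.0, 20.0, 30.0]
axpy_loop!(y, 2.0, x)    # y is now [12.0, 24.0, 36.0]
```

In Matlab you'd reach for the vectorised form to avoid interpreter overhead; in Julia the explicit loop is just as fast once compiled.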
Most programming languages I have learnt have given me a new perspective on the world inside a computer. This includes languages from Forth and assembly language to Python and Haskell. Just learning them expands your mind in some way. Julia is the first non-mind-expanding language I've learnt. If I hadn't used APL, numpy and Matlab before, I might have found its vectorised operations revolutionary. But I had, and Julia doesn't seem to offer anything new. So in that respect it's a bit boring.
But it's not without interesting aspects. Like Lisp, it's homoiconic. That's slightly surprising for a language that superficially looks a lot like Fortran.
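A quick sketch of what that homoiconicity looks like in practice, using Julia's standard quoting syntax (the example is my own): a program fragment is an ordinary `Expr` value you can inspect and rewrite.

```julia
ex = :(1 + 2 * 3)        # quote an expression: it's just data
println(ex.head)          # call
println(ex.args)          # Any[:+, 1, :(2 * 3)]

# Rewrite the expression (swap + for *) and evaluate the result.
ex.args[1] = :*
println(eval(ex))         # 6, i.e. 1 * (2 * 3)
```

This is the machinery Julia's macros are built on.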
And although it's described as dynamically typed, it actually shows that the distinction between static and dynamic typing isn't so straightforward. As in a dynamically typed language like Python, just about every function you write is polymorphic, in the sense that it's not an error to pass in arguments of just about any type. But Julia's one truly interesting feature is that once it knows the types of the arguments, it attempts to monomorphise the function (i.e. specialise it to the required types) and compile it just in time. This gives the best of both the dynamic and static worlds. Except sometimes it gives the worst of both worlds too.
For example, some type-related errors are uncovered during compilation, before the code is run. That's cool for a dynamic language. But sometimes you still get the disadvantage of having to wait until run time for a type error to be discovered, as well as the disadvantage of having to wait for your code to compile every time. Worst of both worlds!
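A small illustration of the specialisation (my own toy example): a single generic definition gets compiled separately for each concrete argument type it meets.

```julia
double(x) = 2x        # one generic, untyped definition

double(3)             # triggers a compiled specialisation for Int   -> 6
double(3.5)           # triggers a second one for Float64            -> 7.0

# Tools like @code_warntype double(3) let you inspect the specialised
# code and spot type instabilities before the code is ever run.
```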
Because it's JIT-compiled, Julia can be painfully slow to import libraries. This is why I use it only for small tasks. I hope this is eventually fixed, as I don't think it's an inherent problem with the language.
Julia has many annoying niggles. For example, arrays start at 1. And within matrix literals, spaces are used as element separators. So, for example, suppose f is a function mapping ints to ints. "f(x)" is a valid expression, and outside of an array literal "f (x)" is a perfectly good way to write the same thing. You can write a two-element matrix of ints as "[1 2]", with a space separating the 1 and the 2. But "[f (2)]" is not the matrix of integers containing f(2). It is in fact a two-element inhomogeneous matrix whose first element is a function and whose second element is an integer. Yikes!
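Spelled out in code (with a hypothetical f of my own choosing):

```julia
f(x) = x + 1

a = [1 2]        # 1×2 matrix of Ints: the space separates elements
b = [f(2)]       # one-element vector containing 3
c = [f (2)]      # 1×2 matrix of Any: the *function* f, then the Int 2

println(size(a))   # (1, 2)
println(b)         # [3]
println(size(c))   # (1, 2) -- c[1] is a Function, c[2] is 2
```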
But for doing things like testing the divide-and-concur circle packing I mentioned recently, it's hard to beat how quickly I got that working in Julia.
And one more thing: functions are first class objects in Julia and forming closures is fairly easy.
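For instance, here's a sketch of a closure and of passing a function around as a value (the names make_counter and apply_twice are my own invention):

```julia
# make_counter returns a closure that captures and mutates `n`.
function make_counter(start = 0)
    n = start
    return () -> (n += 1)    # anonymous function closing over n
end

counter = make_counter()
counter()                    # 1
counter()                    # 2

# Functions are first-class: pass them to other functions like any value.
apply_twice(g, x) = g(g(x))
apply_twice(x -> 3x, 2)      # 18
```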
Hewlett-Packard was once at the cutting edge of technology. Now they make most of their money selling servers, printers, and ink... and business keeps getting worse. They've shed 40,000 employees since 2012. Soon they'll split in two: one company that sells printers and PCs, and one that sells servers and information technology services.
The second company will do something risky but interesting. They're trying to build a new kind of computer that uses chips based on memristors rather than transistors, and uses optical fibers rather than wires to communicate between chips. It could make computers much faster and more powerful. But nobody knows if it will really work.
The picture shows memristors on a silicon wafer. But what's a memristor? Quoting the MIT Technology Review:
Perfecting the memristor is crucial if HP is to deliver on that striking potential. That work is centered in a small lab, one floor below the offices of HP’s founders, where Stanley Williams made a breakthrough about a decade ago.
Williams had joined HP in 1995 after David Packard decided the company should do more basic research. He came to focus on trying to use organic molecules to make smaller, cheaper replacements for silicon transistors (see “Computing After Silicon,” September/October 1999). After a few years, he could make devices with the right kind of switchlike behavior by sandwiching molecules called rotaxanes between platinum electrodes. But their performance was maddeningly erratic. It took years more work before Williams realized that the molecules were actually irrelevant and that he had stumbled into a major discovery. The switching effect came from a layer of titanium, used like glue to stick the rotaxane layer to the electrodes. More surprising, versions of the devices built around that material fulfilled a prediction made in 1971 of a completely new kind of basic electronic device. When Leon Chua, a professor at the University of California, Berkeley, predicted the existence of this device, engineering orthodoxy held that all electronic circuits had to be built from just three basic elements: capacitors, resistors, and inductors. Chua calculated that there should be a fourth; it was he who named it the memristor, or resistor with memory. The device’s essential property is that its electrical resistance—a measure of how much it inhibits the flow of electrons—can be altered by applying a voltage. That resistance, a kind of memory of the voltage the device experienced in the past, can be used to encode data.
HP’s latest manifestation of the component is simple: just a stack of thin films of titanium dioxide a few nanometers thick, sandwiched between two electrodes. Some of the layers in the stack conduct electricity; others are insulators because they are depleted of oxygen atoms, giving the device as a whole high electrical resistance. Applying the right amount of voltage pushes oxygen atoms from a conducting layer into an insulating one, permitting current to pass more easily. Research scientist Jean Paul Strachan demonstrates this by using his mouse to click a button marked “1” on his computer screen. That causes a narrow stream of oxygen atoms to flow briefly inside one layer of titanium dioxide in a memristor on a nearby silicon wafer. “We just created a bridge that electrons can travel through,” says Strachan. Numbers on his screen indicate that the electrical resistance of the device has dropped by a factor of a thousand. When he clicks a button marked “0,” the oxygen atoms retreat and the device’s resistance soars back up again. The resistance can be switched like that in just picoseconds, about a thousand times faster than the basic elements of DRAM and using a fraction of the energy. And crucially, the resistance remains fixed even after the voltage is turned off.
Getting this to really work has not been easy! On top of that, they're trying to use silicon photonics to communicate between chips - another technology that doesn't quite work yet.
Still, I like the idea of this company going down in a blaze of glory, trying to do something revolutionary, instead of playing it safe and dying a slow death.
Do not go gentle into that good night.
For more, see this:
• Tom Simonite, Machine dreams, MIT Technology Review, http://www.technologyreview.com/featuredstory/536786/machine-dreams/