Mark Mullin
Small chance of success, virtual certainty of death..... what's not to like

151 followers
Mark's posts

As I observed on Twitter, the scientific method's pesky insistence on repeatable results is annoying at best. That said, meet my new little friend. This is a neural model that fuses the images from multiple cameras into one and preserves the inherent 3D information that the vision parallax provides. Beautiful to look at, a work of art in motion, and just plain nasty to the core. For those who actually pick it apart, F is a set of feature information representing the derivative of measured error over time, where a change in time is coupled with a change in control parameters; F_d is the magnitude error between features, and F_a is the angular error between features. If you're still here, what I think is really cool is that while there may be a bunch of neurons involved, at the end of the day each level is pretty simple-minded, and it only takes 24 neurons per camera at the top to solve the issue.
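For the curious, here is a minimal numpy sketch of what those two error terms might look like; the function and variable names are my own illustration, not the actual model:

```python
import numpy as np

def feature_errors(f_cam_a: np.ndarray, f_cam_b: np.ndarray):
    """Illustrative only: compute F_d (magnitude error) and F_a
    (angular error) between corresponding (N, 3) feature vectors
    taken from two cameras."""
    norm_a = np.linalg.norm(f_cam_a, axis=1)
    norm_b = np.linalg.norm(f_cam_b, axis=1)
    f_d = np.abs(norm_a - norm_b)                    # magnitude error
    cos = np.sum(f_cam_a * f_cam_b, axis=1) / (norm_a * norm_b)
    f_a = np.arccos(np.clip(cos, -1.0, 1.0))         # angular error
    return f_d, f_a

# Example: 24 top-level features per camera, 3 components each
rng = np.random.default_rng(0)
a, b = rng.normal(size=(24, 3)), rng.normal(size=(24, 3))
f_d, f_a = feature_errors(a, b)
```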


OK, so does anyone on the Tango team have anything to say about TechCrunch not even bothering with Tango when they reviewed Google's AR ambitions? http://techcrunch.com/2015/12/12/lifelike-adverts/?ncid=rss

Well, the following email was certainly a little surreal. I can't say I was impressed, either by its spam nature or by the (to me) disjointed requirements. I hope this isn't really the way the team is being augmented, even with contractors. It only adds to my ever-growing list of concerns...
------------------------------------------------------
Mathpal, Reema (Tekmark Global Solutions) to me
12:26 PM
Hi Mark,

 

I am contact you for a Senior Application Engineer role with my client GOOGLE in Mountain view CA to work with their Project Tango team.

12 months + contract

Market rate!!

 

Please review and respond me ASAP to discuss this opportunity and then submit your resume. Looking forward to talking to you soon. Thanks!

 

Job Description:

Our client is seeking a Senior Application Engineer.

 

Summary:

-Project Tango is a set of mobile technology that gives devices a human scale understanding of space and motion. It uses computer vision technology to perform motion tracking and object detection.

-You will be part of the team that defines the next generations of apps, builds, and ships the apps through Google Play. As such, the role requires a high sense of detail and strong interest in working on new technologies.

-You will be working closely with the core engineering team to implement the latest features and help guide the roadmap for the Project Tango apps.

 

Responsibilities

- Maintain communications and problem solving between local and remote teams.

- Manage software code base.

- Manage releases process together with the QA team.

- Develop apps and prototypes.

- Work with partners and vendors to document, design, and guide implementation of apps.

 

Required Qualifications

- MS degree in Computer Science, related technical field or equivalent practical experience.

- Software architecture and design for apps and/or libraries.

- 4+ years of developing mobile applications.

- Use Android Java Framework API and Android JNI daily.

- Working knowledge of cloud based protocols and systems (HTTPS, HTTP, JSON, Protobufs, App Engine, Compute Engine, AWS, etc.).

 

Nice to have qualifications:

- Working knowledge of source code versioning and builds systems (Git, Github, make, gradle, ant, Jenkins, etc.

- Experience with at least one of the following: software architecture, library/framework development, or machine learning.

- Experience with vector and motion graphics, including Android Animation Framework, OpenGL, or Unity. - Fluency in one or more of the following: Python, C, C++.

- Excellent leadership, communication, project management, and organizational skills.

- Knowledge of material design standards and principles.

 

So I gave a talk on machine learning at the Boston .NET Code Camp (#boscc). It was kind of bittersweet: it was good to give back to a community that has done much for me over the last 15 years, but somewhat sad, as with the close of that talk I have now utterly abandoned the .NET stack.

So I did manage to finish the machine learning presentation. It's grossly oversimplified, but I think it aligns with the objective. We need more professional developers to help move us from the flexibility and gruesome execution speed of Python to faster execution environments, so that more applications everywhere can take advantage of what the research community has proven out. For example, did you know that you can use a support vector machine to watch exactly what your user is doing in real time and have the interface continuously adapt to this knowledge? For better or worse, the deck is here - https://drive.google.com/open?id=0B0-Mz689ovpnMGdVbUEyUTVzR0k
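For a concrete flavor of that SVM example, here is a minimal scikit-learn sketch, entirely my own and not from the deck: SGDClassifier with hinge loss is an online linear SVM, and partial_fit lets it keep learning as user events stream in. The feature names and UI modes are hypothetical.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Hinge loss makes this an online linear SVM.
clf = SGDClassifier(loss="hinge")
classes = np.array([0, 1])  # hypothetical UI modes: 0 = browsing, 1 = editing

def on_user_event(features, label):
    """Update the model with one observed interaction in real time."""
    clf.partial_fit(np.asarray(features).reshape(1, -1), [label],
                    classes=classes)

def adapt_interface(features):
    """Predict what the user is doing and pick a UI mode to adapt to."""
    return int(clf.predict(np.asarray(features).reshape(1, -1))[0])

# Simulated event stream: [clicks/sec, scroll distance, keypresses/sec]
rng = np.random.default_rng(1)
for _ in range(200):
    x = rng.normal(size=3)
    on_user_event(x, int(x[2] > 0))   # toy ground truth
print(adapt_interface([0.1, -0.3, 1.2]))
```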

To do the Cloud, you must be the Cloud.

I mentioned in an earlier post that I had been interviewing at a number of very smart companies lately, all of which are truly pushing the boundaries of how computers are meant to be used in various environments. For the ones that are truly stellar, there is one commonality. They don't just do the cloud. They live in the cloud from start to finish. There are no hulking desktops, only powerful mobile Apple laptops. There are no large enclosed areas housing any kind of development infrastructure; it's all in the cloud.
I can appreciate this, as exactly the same thinking applies to the design of secure software. I receive far too much credit for being far too knowledgeable about security because of a simple conclusion I reached a long time ago. The first step is simple. Don't do insecure things; to wit, don't use unencrypted communications. Ever. Not even in R&D. Never ever ever. The result is that you will never wake up in a cold sweat wondering if the right security flags got thrown in your last release. In my world, crypto was always on. Communications were either secure or unavailable. In what I have seen of best practices in the cloud, the rule is the same. The truly knowledgeable players know that the cloud is not a destination; it's where you live.
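In that spirit, a trivial sketch of what 'secure or unavailable' looks like in code; the guard function is my own illustration, not a prescription:

```python
import requests
from urllib.parse import urlparse

def fetch(url, **kwargs):
    """Refuse to speak plaintext: secure or unavailable, nothing in between."""
    if urlparse(url).scheme != "https":
        raise ValueError(f"insecure URL refused: {url}")
    # verify=True (the requests default) keeps certificate checking on too
    return requests.get(url, timeout=10, **kwargs)

fetch("https://example.com")      # fine
# fetch("http://example.com")     # raises ValueError, even in R&D
```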
Let's consider some issues that confront a business with respect to the cloud. The foremost is pricing. Customers do not respond well to prices that vary by usage, yet the business only knows at the end of each month what each customer cost. Yes, there are lots of ways of breaking it down, but there is risk there that isn't there if you're hammering on your own capital equipment. That said, the business has also probably been negotiating its way through ever more complex internet provider contracts over the years, so this isn't the first fixed/variable-cost IT problem that's ever shown up. The issue is that the cloud is new and mysterious, and there's not a lot of data to project from.
In developing an application, it's a given that employee costs will dwarf any other expense. That is exactly why it is best for the business to move the entire effort into the cloud, if for no other reason than that the data acquired will be invaluable in pricing, and any capex with respect to infrastructure will be eliminated. Early development will give you smaller measures that let you estimate the risk of 'oops, something happened that made the machine go permanently crazy and run flat out for 24 hours'. Later development gives you detailed metrics from all of the software testing artifacts. Some of these artifacts exist specifically as test cases for edge conditions, i.e. beyond what is considered the boundary of the system's capabilities, and as tests for what the power customers do, i.e. ask for lots and lots of complex operations. With actual cost values from later development and some feeling for risk from early development, there's enough information in hand to factor a variable-cost cloud solution into a competitive fixed price.
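As a toy illustration of that last step (the numbers are invented), folding measured variable cost plus a risk appetite into a fixed price might look like:

```python
import numpy as np

# Hypothetical per-customer monthly cloud costs (dollars) gathered from
# development and test runs, including one 'machine went crazy' outlier.
observed_costs = np.array([11.2, 9.8, 14.1, 10.5, 12.9, 40.0, 10.1])

mean, std = observed_costs.mean(), observed_costs.std()
risk_appetite = 2.0   # how many standard deviations of variability to absorb
margin = 1.25         # desired gross margin

fixed_price = (mean + risk_appetite * std) * margin
print(f"competitive fixed monthly price: ${fixed_price:.2f}")
```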
In considering the creation of a cloud-based solution, a 'pretty much' cloud-based solution is as pointless as a 'pretty much' secure application. Consider the cloud: large numbers of geographically distributed machines with very fine-grained capability/cost selection, capable of providing world-class performance all over the world, and tended every second of every day by highly skilled, highly dedicated teams. Now consider your proprietary IT infrastructure. I'll wait...
If you were ever to entangle some poor beast in your own IT infrastructure with the cloud, you would have one machine that has to serve or interact with machines in the cloud that effectively never go down and can duplicate themselves all over the world in response to live customer demand, while your machines are fixed in space, cannot get bigger or smaller whenever they want, and it's your problem if a CPU bursts into flame. The instant you have a 'pretty much' cloud-based solution, you're going to have to write any SLA with respect to the capabilities of your machines. Unless you are very large indeed, or have truly peculiar requirements, the economies of scale pretty much prove you've got a losing proposition out of the gate.
I could go on, but why bother? This isn't a direct argument for why one should go to a pure cloud solution; rather, it is about how to go to a cloud solution. With respect to pricing the offering, by definition you will collect the test data necessary to price within any tolerance you want, and initial experience will allow you to adjust that price based on your appetite for risk. With respect to delivering the offering, it's an all-or-nothing approach unless one wants to instantly become the weak link in the chain. Leave the workstations there if you have to, and let them talk to all the legacy systems. For the cloud, every machine resource involved in the visualization, construction, and delivery of any solution should be in the cloud as well.
In conclusion, consider it this way. Though the stories are endemic, nobody wants to bug a key developer on their long-delayed vacation and make them frantically try to remember some arcane detail. It's always because the situation is desperate. You can never know when something might go wrong. Imagine a world where that developer can provide the 15 minutes of arcane data and happily return to their vacation, because they fixed what you needed right from their phone. And yes, it is more secure than if you locked everything up in your building.

Is Apple about to recover their position?

Interesting headline, no?  This is not at all an argument that Apple is not viable, as the raw cash they have on hand would qualify them as a successful international banking power, never mind the obvious market presence they have in the mobile space.  Rather, this is an observation that a business that has long been considered out of their grasp may actually be returning to the fold, and this would drive them to ever higher and broader levels of market dominance.  It's a change that seems to be happening from the ground up, rather than the top down.
I was once closely involved with Apple, when they still had a multicolored logo and the Macintosh was a harbinger of the world to come.  We had a bit of a falling out due to something I had no control over that made them very mad.  By 1995 I'd given up on them and switched over to Windows full time, feeling at that time that Microsoft, having rammed core bits of NT capabilities into the whimpering heart of Windows 95, was leaving them behind.  When I worked with Apple, the rules were simple.  You got it running on Macintosh, it looked beautiful, it ran well, and everybody was happy.  Then you ported what you had to Windows, and hoped it wasn't an utter disaster.
I remember the day it all changed. When all was well, many advertising goofs were made in the various trade rags, where Windows products were advertised with Macintosh screenshots. The Windows users complained, but everyone pretty much ignored them. Then that fateful ad ran in a Mac trade rag where the app had a Windows interface. The user community went into a frightful rage, and then the truth leaked out: the ad ran because the Windows variant got built first. If Steve Jobs had not returned, my feeling is that Apple would most certainly have died by the early 2000s. He did return, and the rest of that story is written on the world stage. All that was lost from the old days was the Macintosh as a widespread development platform with a firm grasp on developer mindshare. This is the market I am referring to, and were Apple to capture it, their future might be cemented as Microsoft's becomes questionable.
I've been on a number of interviews recently at organizations that are extremely good, are chasing very hard problems, and, for the most part, have enough proof in hand that their success is more a question of final growth, polish, and a mature sales channel. Unsurprisingly, their back ends target various UNIX variants, and all of the cool things that run on them. Java 8 is the language of choice, and the subsystems they use all come from the usual group of suspects: NoSQL databases like MongoDB, complex JAX-RS web service architectures, and responsive front ends using systems such as Backbone. Furthermore, they are all very smart about decentralized computing ("the cloud") and all have vast fleets of machines scattered throughout the large cloud ecosystems. In short, the UN*X-powered open source movement is continuing to broaden, driven by both the advantages of shared work (open source) and significantly reduced licensing costs (open systems).
Creating new software hasn't really changed since we first started. The tools have improved immeasurably, but problem complexity has more than kept pace with tool evolution. Software has to be conceived, created, and tested. In all the small streams feeding the product river, each person makes something, sees if it works as expected, and moves it on down the line. In the worlds I have recently been privileged to visit, one thing is true: Macintoshes all the way. Nary a Windows machine to be seen. In fact, every machine is mobile, as all of the Macintoshes appeared to be laptops.
On examination, it makes perfect sense. Most of the heavy-lifting development machines are in the cloud, right along with everything else. As long as the connection isn't congested, the local workstation has real-time access, interaction, and control over all of the resources it is using, and there is no friction with respect to distance. Furthermore, as the heart of the Macintosh is both UNIX-based and visible, there is very little impedance between systems in this world composed of Macs and UNIX-based systems. In the Microsoft world, extensive discussions with recalcitrant computers are required to get them to recognize UNIX-based systems at all, and an almost legendary amount of work is needed to have UNIX computers recognized as first-class citizens.
There is a case to be made that the Macintosh may be slowly (for now) returning to become the dominant platform used by software developers. In point of fact, it doesn't even need to become dominant; it just needs to attract enough developers that it becomes the focal point, as it was in the early '90s. There were always more Windows machines, but most of the leading apps had deep Macintosh roots. I clearly remember that the entire Mac community was frustrated by the belief that this was primarily due to the 'Mac tax': why didn't Apple fix it so we could really dominate? Regardless, there is little question that Macintosh-developed applications led and Windows-developed applications followed, if the software was meant to excel in the world of graphical user interfaces.
Today, the environment within which the next generations of software are being developed is highly mobile, where many of the resources that were once physically located within company buildings are now accessible anywhere with no impedance or delay. Macintosh devices offer both a high-touch focus on the physicality of the device and low impedance when integrating with an extremely wide range of cloud-based resources. Does this mean that Apple will recover their position as the acknowledged alpha of developer platforms? Possibly so. However, this time I think things may be a little different. I look at this and consider the 'Mac tax'. What is different is that the Macintosh is now the physical manifestation of primarily cloud-based resources, and anything I do as a developer on it is probably intended for the cloud anyway. I pay 30% more for a beautifully designed laptop, but that 30% is strictly for the device that serves as my touchstone with the modern software world. Seems a no-brainer to me. What is different this time is that the 'Mac tax' not only doesn't apply, it isn't even there; it is now the obvious fee for value received. I can say it would be 'interesting' to watch a world where Microsoft wound down to irrelevance and Apple maintained dominance, all because of something Microsoft often scoffed at.
So there you have it. Is the nature of software development and delivery changing such that the Macintosh will once again blaze the way? All I can say for certain is that the thought weighs heavily with respect to the purchase of a new laptop. I have watched my own systems over time, and for someone so heavily invested in the Microsoft technology stack, the data is clear. One can argue over the rate at which those systems are converting to UNIX; one cannot argue over whether the process is happening. As far as I can tell, I agree with what I've seen: Macs look like a winner again.


OK, so some of you know I worked to make a wallfinder for Tango and subsequently filed a provisional patent on it. Reviewing what I think the lifetime of that stunt is (short, because it really is a reduction of the search space, and processors are still getting faster on the mobile side) against the window for paying a lot of money to a lawyer, I'm thinking of just scrapping the patent and publishing the work. That said, moving some of the code off of the server and onto Tango is going to be an exercise in pain and misery; for example, I'm pretty certain that Eigen (http://eigen.tuxfamily.org/index.php?title=Main_Page) is going to be part of it (PCA or equivalent is vital).
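For anyone curious what the PCA step buys you, here is a minimal numpy stand-in for the kind of thing that would be ported to Eigen: fit a plane to a patch of point cloud by taking the least-variance principal component as the wall's normal. Illustrative only; the real code is considerably less polite.

```python
import numpy as np

def fit_plane(points):
    """Fit a plane to an (N, 3) point-cloud patch via PCA.

    The eigenvector with the smallest eigenvalue of the covariance
    matrix is the direction of least variance, i.e. the plane normal.
    Returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = eigvecs[:, 0]
    return centroid, normal / np.linalg.norm(normal)

# Toy patch: a noisy vertical wall lying in the x-z plane (y ~ 0)
rng = np.random.default_rng(2)
pts = np.column_stack([rng.uniform(0, 3, 500),
                       rng.normal(0, 0.01, 500),
                       rng.uniform(0, 2.5, 500)])
c, n = fit_plane(pts)
print(n)   # approximately [0, 1, 0]
```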
If you like the idea, upvote the post. If you'd like to help up front, getting neck-deep in all the scary bits, pick migrating the nasty math; if you'd like to help once there is a library, however sketchy, pick helping test. Thanks
Poll results (10 votes, visible to the public):
- Help Migrate Nasty Math: 40%
- Help Test after Migrating Core Logic: 60%

Paradox in modern software systems

In my interviews these days, as I look for a new home, I get asked a number of hypothetical questions about REST API design. The response I really want to give is not the one they want to hear, and I am a practical cat. That said, I am now suffering from an overwhelming desire to give the response I want to, which pretty much boils down to 'are you crazy?'.

Consider a simple REST API implementing basic social messaging functionality. Let's dispense with the easier elements of the exercise (a minimal sketch of the service follows the list):
1) There exists a URI messages/<messageId> which represents the resource for a distinct singular message, and the common HTTP verbs are realized as:
a) GET will net you the pre-existing message, or an error depending on any needed credentials, content type, etc., in the headers
b) PUT is not allowed - messages are considered immutable
c) POST will create a new message ID for the posted message and persist it - of course XML and JSON are supported via content-type negotiation. The new message ID is available as a simple scalar result given a non-error return.
d) DELETE is not allowed - messages never disappear - think before you send :-)

2) Any message can have comments, shares, and ratings. There exists the set of URIs messages/<messageId>/<attribute>/<attributeId>, e.g. messages/1/comments/1 is a request for comment 1 attached to message 1. The <attribute> value is taken from a constant set of values in this example, specifically [comments, shares, ratings].
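Here is that minimal sketch of the service, in Flask, with all names my own invention; jumping ahead, it already scopes comment ids to their message, which is the design I argue for below.

```python
from flask import Flask, jsonify, request, abort

app = Flask(__name__)
MESSAGES = {}            # messageId -> message body
COMMENTS = {}            # messageId -> ordered list of comments
next_message_id = 1

@app.route("/messages", methods=["POST"])
def post_message():
    # Messages are immutable (no PUT) and never disappear (no DELETE).
    global next_message_id
    message_id = next_message_id
    next_message_id += 1
    MESSAGES[message_id] = request.get_json(force=True)
    COMMENTS[message_id] = []
    return jsonify(message_id)       # the new id as a simple scalar result

@app.route("/messages/<int:message_id>", methods=["GET"])
def get_message(message_id):
    if message_id not in MESSAGES:
        abort(404)
    return jsonify(MESSAGES[message_id])

@app.route("/messages/<int:message_id>/comments/<int:comment_id>",
           methods=["GET"])
def get_comment(message_id, comment_id):
    # comment_id is scoped to its message: 1 is always the first comment
    comments = COMMENTS.get(message_id)
    if comments is None or not (1 <= comment_id <= len(comments)):
        abort(404)
    return jsonify(comments[comment_id - 1])
```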

One issue I have is that many constructed examples assume the sets of attributeId values across messageId values are non-intersecting, i.e. you can effectively derive a messageId from an attributeId. Even if the identities of messageId and attributeId start at a common origin, given that a singular id is used across any set of attributes, it simply follows that the attribute and the messageId are derivable as a function of the attributeId, because the attributeId is globally unique. If every attributeId is globally unique, then by definition it is unique to a specific messageId, and therefore the fundamental system model posits that messageId = functionOf(attributeId).

Consider a bug where, accidentally, some attribute was simply copied from one message to another. Of course it was reinstantiated. Of course there's a new attribute id. You promise you never made a mistake anywhere in any implementation. But what if you had? You're in a situation where the system is in fundamental disagreement with itself. Say that when everything was good, this was legal: /messages/42/comments/1600. A bad thing happens and /messages/49/comments/1600 gets materialized. What do the deeper systems do?

Obviously, where the ids are primary keys, that will fail out of the gate. But as the complexity of the model grows, not everything is a primary key. You will end up in this situation because your system is fundamentally capable of representing paradoxical information, which cannot be resolved. This is the fundamental goal of simple normalization of information by the removal of redundant data. Efficiency is not the only goal. The real goal is to have a system that is incapable of holding multiple opinions about the same atomic fact. This is the heart of transactional database design. It is also the heart of any software design once it reaches a certain level of complexity.

My concern with this lies in what I consider to be a fundamental 'ility' of a system, just like scalability or extensibility: the measure of 'durability', or conversely 'fragility', of a system. In the example given above, it is possible for the system to end up in a state that is both wrong and, more importantly, illegal with respect to the metadata of the system. In simplest terms, you've created a paradox, and the only way out of a paradox is a random function with respect to the set of paradoxical discriminants; or, pick what you like, look everything else up again, and deal somehow with the fact that a piece of information magically disappeared, and your system has effectively broken a fundamental rule whose effects will return to bite you. Oh, and you may have changed contexts in this process through a fundamentally unobservable change in metadata relations.

What you need to do is define key information with respect to the context within which it is scoped. A messageId can be seen as evidently global, defining a specific message within the entire set of known messages. An attributeId, however, defines an attribute with respect to the message it is an attribute of. It has no independent meaning beyond that fact. If the message and its attribute did not exist, neither would it. Therefore, it is better to consider the identity of attributes as a distinct monotonically increasing sequence (1, 2, 3, ...) for any given attribute of any given message.
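In storage terms, scoping identity to context is just a composite key. A minimal sqlite sketch, again my own illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE messages (
    message_id  INTEGER PRIMARY KEY
);
CREATE TABLE comments (
    message_id  INTEGER NOT NULL REFERENCES messages(message_id),
    comment_seq INTEGER NOT NULL,           -- 1, 2, 3, ... per message
    body        TEXT,
    PRIMARY KEY (message_id, comment_seq)   -- identity scoped to its context
);
""")
conn.execute("INSERT INTO messages VALUES (42)")
conn.execute("INSERT INTO comments VALUES (42, 1, 'first!')")
# comment_seq 1600 can now exist under many messages without paradox:
# /messages/42/comments/1600 and /messages/49/comments/1600 are distinct facts.
```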

Parenthetically, the discussion of whether or not messages are better defined within the context of a user is far more complex, and generates the most common answer of any practiced architect: "It depends."

The foremost result of this scoping of identity to context is that it is no longer possible to create constructs that define paradoxical information with respect to the system's information model. This isn't to say you can't define illegal constructs, i.e. indexing an attribute that does not exist; rather, it's that you can't have proper syntax and base semantics (the /messages/*/comments/1600 example) that allow you to create a paradox in the higher levels of the system. You can define something not within the system, but you can no longer define something paradoxical.

Other benefits accrue as well. If you consider pagination of an attribute set such as comments, this process moves from an effectively random set of keys to an ordered set of keys, vastly simplifying any sequential database operations. It effectively enriches the system vocabulary, as in providing /messages/<messageId>/comments/1 as a mechanism to access the very first comment, where the query parameter extension /messages/<messageId>/comments/1?orderBy=votes would cause the REST service to return the most upvoted comment. At the end of the day, the statement that /messages/1/comments/1 is a meaningful statement incapable of paradox is the fundamental benefit. From that accrues the ability to infer (admittedly with code enforcement) that for any /messages/<message>/comments/1, this is a reference to the first comment with respect to a global default ordering of comments.
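Extending the earlier Flask sketch, an orderBy variant might look like this, replacing its get_comment handler (Flask won't allow both route registrations at once); the parameter handling is my own illustration:

```python
@app.route("/messages/<int:message_id>/comments/<int:rank>", methods=["GET"])
def get_comment_ordered(message_id, rank):
    """rank is interpreted against an ordering rather than a global key."""
    comments = COMMENTS.get(message_id)
    if comments is None:
        abort(404)
    if request.args.get("orderBy") == "votes":
        ordered = sorted(comments, key=lambda c: c.get("votes", 0),
                         reverse=True)
    else:
        ordered = comments        # global default ordering: creation sequence
    if not (1 <= rank <= len(ordered)):
        abort(404)
    return jsonify(ordered[rank - 1])
```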

Summarizing: one thing that is missed too often in system architecture is the understanding that as models become more complex, it becomes easier and easier to introduce paradox into those models. It is critical to consider identity with respect to the context within which it serves as an identity. I cannot say that in an interview when handed the scenario and given an obvious expectation, but it rankles enough that I finally had to put fingers to keyboard. When considering the definition of identity, the context of that identity is critically important. Nesting identities that are not truly contained means that the system is fundamentally capable of expressing paradoxical data. Any system so defined is just waiting for something really bad to happen.

In conclusion, I'd like to once again thank Douglas Hofstadter for GEB (https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach). I read GEB early in my career, and many times over since then. Over my entire career, I have come to agree with it more and more every day. The interest, risk, and power of systems is directly related to their relations to the metadata that defines them at any given level, and to their capacity for handling legally expressible paradoxes. My experience with modern software technology says that production systems plus paradoxes are a disaster that is either happening or will happen.