Baptiste Pizzighini
Full-stack developer, software architect, business-aware hacker, but customer-centric evangelist above all.


Post has attachment
Windows is live on Git

[...] the Windows code base is approximately 3.5M files and, when checked in to a Git repo, results in a repo of about 300GB. Further, the Windows team is about 4,000 engineers and the engineering system produces 1,760 daily “lab builds” across 440 branches in addition to thousands of pull request validation builds [...]

Post has attachment
P got its start in Microsoft software development when it was used to ship the USB 3.0 drivers in Windows 8.1 and Windows Phone. These drivers handle one of the most important peripherals in the Windows ecosystem and run on hundreds of millions of devices today. P enabled the detection and debugging of hundreds of race conditions and Heisenbugs early on in the design of the drivers, and is now extensively used for driver development in Windows.

Early positive experience with P in the Windows kernel led to the development of P#, a framework that provides state machines and systematic testing via an extension to C#. In contrast to P, the approach in P# is minimal syntactic extension and maximal use of libraries to deliver modeling, specification and testing capabilities.

Post has attachment
Designed by Saint Petersburg-based engineers at JetBrains, a company that builds tools for developers, Kotlin was intended to improve on the shortcomings of Java (the dominant language for Android) while being completely interoperable with it, meaning you could switch to Kotlin mid-project without having to rewrite old Java code.

Post has attachment
A 10x programmer is, in the mythology of programming, a programmer who can do ten times the work of a normal programmer, where by "normal programmer" we can imagine one who is good at their work, but without the magical abilities of the 10x programmer.

Actually, to better characterize the "normal programmer," it is better to say that they represent the average programming output among professional programmers.

The programming community is extremely polarized about whether such a beast exists: some say there is no such thing as the 10x programmer, while others say that not only do they exist, there are even 100x programmers if you know where to look.

Post has attachment
#Javascript doesn't have static types, and doesn't enforce immutability, so the following things tend to happen in almost every project I've come across:

1/ The first version is (most of the time) clean and simple. Well-chosen libraries are used following best practices.

2/ As a project grows, big refactorings become more and more "dangerous". There's always the risk of a runtime error that was not caught in development and will only surface in production.

3/ New changes, then, become small refactorings, mostly thin layers of code over previous code. A lot of null / undefined testing takes place, unit tests are corrected, and new ones are written.

These thin layers of code end up adding bits of complexity to the code. At first it's manageable, but, months later, the project starts getting more and more difficult to change. The worst effect on these projects is losing reliability and safety: we are not sure the application does what we want it to do, and we are not sure if any hidden bugs will make it to production.
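The hazard described above can be made concrete. The sketch below is not from the quoted post; it is a hypothetical TypeScript example (the names `findUser` and `emailOf` are invented for illustration) showing how a static type checker turns the "runtime error that surfaces in production" into a compile-time error during a refactoring:

```typescript
// Hypothetical sketch: a refactoring made a lookup nullable.
// In plain JavaScript the change is silent until production;
// with static types, every unguarded caller fails to compile.

interface User {
  name: string;
  email: string;
}

const users: User[] = [
  { name: "Ada", email: "ada@example.com" },
];

// After the refactor, the return type is User | undefined.
function findUser(name: string): User | undefined {
  return users.find((u) => u.name === name);
}

function emailOf(name: string): string {
  const user = findUser(name);
  // Without this guard, `user.email` would not type-check
  // under strictNullChecks.
  if (user === undefined) {
    return "<unknown>";
  }
  return user.email;
}
```

Under `strictNullChecks`, deleting the `undefined` guard is rejected by the compiler rather than discovered as a crash in production, which is exactly the class of error the post is worried about.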

Post has attachment
Early on, #Rust had a “green threading” model, not unlike #Go’s. [...]

The problem is that green threads were at odds with Rust’s ambitions to be a true C replacement, with no imposed runtime system or FFI costs: we were unable to find an implementation strategy that didn’t impose serious global costs. [...]

So if we want to handle a large number of simultaneous connections, many of which are waiting for I/O, but we want to keep the number of OS threads to a minimum, what else can we do?
Asynchronous I/O is the answer – and in fact, it’s used to implement green threading as well. [...]

The problem is that there’s a lot of painful work tracking all of the I/O events you’re interested in, and dispatching those to the right callbacks (not to mention programming in a purely callback-driven way). [....]

That’s one of the key problems that futures solve.
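Rust's futures have their own API, but the shape of the problem is language-independent. As a rough analogy only (TypeScript Promises standing in for futures; `fetchSize` and friends are invented for illustration, not Rust's actual interface), here is the callback-driven style the post calls painful next to the future-based style that replaces it:

```typescript
// Callback style: the caller must thread every continuation and
// every error path by hand, as the quoted post describes.
function fetchSizeCallback(
  url: string,
  onDone: (size: number) => void,
  onError: (err: Error) => void,
): void {
  setTimeout(() => {
    if (url.length === 0) onError(new Error("empty url"));
    else onDone(url.length); // stand-in for a real I/O result
  }, 0);
}

// Future/promise style: the pending result becomes a first-class
// value that can be stored, combined, and awaited.
function fetchSize(url: string): Promise<number> {
  return new Promise((resolve, reject) => {
    fetchSizeCallback(url, resolve, reject);
  });
}

async function totalSize(urls: string[]): Promise<number> {
  // Combining many in-flight operations is one line instead of a
  // hand-rolled event-dispatch loop.
  const sizes = await Promise.all(urls.map(fetchSize));
  return sizes.reduce((a, b) => a + b, 0);
}
```

The point of the analogy: once the pending result is a value, composition (`Promise.all`, chaining) replaces the bookkeeping of tracking I/O events and dispatching to the right callbacks.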

Post has attachment
In many ways ClojureScript has been, and continues to be, ahead of the JavaScript mainstream with respect to best practices. Concepts which are only starting to break into the mainstream, such as immutability, single-atom application state, and agile UI development via robust hot-code reloading, are old news to ClojureScript users. And thanks to the underappreciated Google Closure compiler, ClojureScript offers features like dead code elimination and precise code splitting that popular JavaScript tooling is unlikely to achieve anytime in the near future.

Post has attachment
If you've not heard of the Closure Compiler, it's a JavaScript optimizer, transpiler and type checker, which compiles your code into a high-performance, minified version. Nearly every web frontend at Google uses it to serve the smallest, fastest code possible.

Post has attachment
If you want to target a broad consumer audience, it’s safest to assume that users’ skills are those specified for level 1. (But, remember that 14% of adults have even poorer skills, even disregarding the many who can’t use a computer at all.)

To recap, level 1 skills are:
- Little or no navigation required to access the information or commands required to solve the problem
- Few steps and a minimal number of operators
- Problem resolution requiring the respondent to apply explicit criteria only (no implicit criteria)
- Few monitoring demands (e.g., having to check one’s progress toward the goal)
- Identifying content and operators done through simple match (no transformation or inferences needed)
- No need to contrast or integrate information

Anything more complicated, and your design can only be used by people with skills at level 2 or 3, meaning that you’re down to serving 31% of the population in the United States, 35% in Japan and the UK, 37% in Canada and Singapore, and 38% in Northern Europe and Australia. Again, the international variations don’t matter much relative to the big-picture conclusion: keep it extremely simple, or two thirds of the population can’t use your design.

Post has attachment
"Capsule summary of this thread: Justin Schuh finally comes out and broadcasts what security people have been telling each other for years, which is that antivirus software is one of the biggest impediments to hardening software.
I think by 2016 most people understand that antivirus doesn't work, and that it's installed more as a compliance and IT management check box than anything else.
I think people who have paid attention for the last couple years as Tavis Ormandy has published AV bug after AV bug have a good sense for the low software quality of AV systems, and understand how it creates new vulnerabilities on systems.
What I don't think we've seen is someone explaining that not only is AV ineffectual and unreliable, but that by dint of being installed across hundreds of thousands of machines and because of the kernel and runtime grubbing that AV requires as part of its security theater, AV makes it much harder to deploy OS and runtime countermeasures to attacks.
There was a Project Zero post on HN yesterday (someone can find it) about how Chrome wanted to do full Win32 syscall filtering for its sandbox processes, but couldn't, because Adobe and Widevine relied on some Win32 syscalls --- and so DRM not only doesn't work and creates new vulnerabilities (cough Flash cough), but also makes it harder for Chrome to implement new security features that can eliminate whole classes of bugs.
Same deal with AV."