Post has attachment
How good is your knowledge of open source licensing?
If you want to test it today, our OSADL Quick License Compliance Check is available.

Post has attachment
Windows Subsystem for Linux exits beta, will become fully supported in Fall Creators Update

Post has attachment
Dear OSADL member,

As you already know, the 18th Real Time Linux Workshop (RTLWS) will be
held in Prague, Czech Republic from October 19 to 20, 2017 and will be
followed by the 7th Real Time Linux Summit on October 21, 2017.

There are still some slots available at the RTLWS to present your latest
and greatest news with respect to real-time of any kind running under or
in any relation to Linux. It would be great if you could take the
opportunity to compose an abstract in which you describe what you did in
this field and share your experiences with us. We are particularly
interested in case studies, successes and failures, and lessons learned.

Please select one of the two presentation forms:

1. full 60-minute presentation and paper in the proceedings, or
2. work-in-progress 30-minute presentation without a paper.

To give you some more time, the RTLWS abstract submission deadline has
been extended to July 14, 2017.

Please find below the links regarding the RTLWS/RT-Summit:

General information:
https://www.osadl.org/RTLWS

Abstract submission for the Real Time Linux Workshop (RTLWS):
https://www.osadl.org/RTLWS-Submission

Details of the call for presentations of the Real Time Linux Summit:
https://wiki.linuxfoundation.org/realtime/events/rt-summit2017/cfp

The formal registration for both the Real Time Linux Workshop and the
Real Time Linux Summit is handled by the Linux Foundation. Please use
the registration link below.

Registration information for both events:
https://wiki.linuxfoundation.org/realtime/events/rtlws2017

Please contact us anytime if you have further questions.

Looking forward to welcoming you in Prague.

Best regards,
Your OSADL team

Post has shared content
If you happen to be at +Open Source Automation Development Lab (OSADL) eG​​ Networking Day tomorrow, don't miss +Enrico Jörns​​' talk about updating Embedded Linux devices in the field with RAUC (http://rauc.io)!

Post has shared content
OSADL Networking Day, Part I

Unfortunately, there is no video live stream for the OSADL Networking Day, so here are some conclusions from today's talks in Heidelberg.

The day started with Armijn Hemel introducing the topic of IoT security: internet-connected salt shakers, dishwashers etc. open up a whole new range of devices which might be sensitive to many of the security issues we have seen in the IT business during the last decades.

Dr. Lutz Jänicke (Phoenix Contact Group) then gave his talk about vulnerability management. As 100% security is impossible, one needs strategies for finding vulnerabilities in one's products and taking care of them. His argument is that we need an institution that collects vulnerabilities, which is where CERT@VDE comes into the game. This institution shall collect information about vulnerabilities, talk with the manufacturers about solutions and take care of publication. CERT@VDE is specially focussed on the automation market. As a non-profit organisation (well, not entirely: you need to be a member to make use of their services), VDE seems to be in a good position to provide this kind of service to European customers. The CERT has offers for component manufacturers, machine integrators and asset owners: they want to act as a coordinator between the players, maintain a central knowledge base, and provide fast responses, especially for small and medium enterprises. More information may be available on https://cert.vde.com. My conclusion: it's good to see that the industry is starting to take some action; let's see what it means in practice and whether they really have an interesting offer for real-world problems. Currently, the website looks more like an aggregator of existing vulnerability databases and focusses on automation devices, not software components.

Philippe Ombredanne (nexB Inc. and AboutCode.org) then talked about how they help customers find out which software components are included in their products. According to his experience, some techniques such as JavaScript or Docker containers make it very easy to ship a lot of software without knowing what's inside. They have developed tools to scan large amounts of software, fingerprint components and find out which packages a firmware consists of. My conclusion: they do a lot of tracing in the live system and other voodoo, which simply isn't necessary if you have full control over your build process, which is a good idea anyway if you want to stay in control of your system in case something goes wrong. So I'm not too excited.

Philipp Michel from Wind River then talked about "Securing Linux systems in the Internet of Things". McAfee analyzed the market potential and estimates 8 billion IP-connected devices in 2019. He outlined that the CVE database has turned out to be the de-facto standard for a unified vulnerability database. For example, in 2016 there were more than 1000 critical vulnerabilities in the CVE database, and about 18 new CVEs per day. So the first necessary step is to constantly monitor vulnerabilities in the CVE database, on mailing lists etc. (optimally with tools, not manually), then, for example, analyze and classify their relevance for certain customer applications, and finally notify customers proactively and suggest the necessary steps. Wind River publishes a bi-monthly bulletin to their customers which collects their findings, and pushes out patches once per month or once per three months, depending on the customer's service level. For scanning, they run Nessus against their Linux distributions and find out which versions are affected by which CVEs. To handle the roughly 2000 CVEs per year, they have a team of about 4-5 engineers. My conclusion: an interesting talk; they identified the challenges correctly and seem to have drawn the right conclusions. It's just a pity that what they are doing is part of a closed product, not part of a broader community effort.

The last talk of the first session, by Dimitri Philippe (BE.services), was about "Embedded security shield - Cybersecurity of Linux systems based on Kaspersky Lab's technology". The good news is that the industry seems to be establishing norms (e.g. IEC 62443) for IT security, which means that manufacturers need to take care of the topic. Any security assessment starts with analysing what's inside a device; then you define a threat model and start integrating a secure system based on the findings. One of the important aspects is to take care of proper isolation of services, so that if an attacker gets in, the effects remain as contained as possible. BE works with Kaspersky tools, i.e. the Kaspersky Security System, which analyzes how processes communicate with each other and checks that against known profiles. For example, there is a main communication channel in CoDeSys which is used to connect a lot of different external things to the PLC core; they use KSS to supervise this communication. My conclusion: an interesting talk, but I'm not sure why we need proprietary products for this.
31.05.17

Post has shared content
OSADL Networking Day, Part III

After the lunch break, Julia Lawall talked about "The Linux kernel as an object oriented system" and how to make sure the kernel's data structures aren't corrupted. A central mechanism is to make structures "const" at the C level. While about 20k data structures are already protected and people are constantly improving constification, the kernel unfortunately keeps growing at the same scale. So kernel developers are now thinking about how to add helper infrastructure based on gcc and Coccinelle. Some Coccinelle semantic patches have already been written that find code which could be constified. Once her semantic patch was finished, it was immediately able to convert 2378 data structures to const, which is quite a success. However, there are still about 30k data structures which need a deeper look, so the effort won't be finished immediately. On the other hand, there are already some ideas on how to further improve the situation.
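To illustrate the idea behind constification (this is a hypothetical, self-contained example, not actual kernel code): a structure of function pointers that never changes after initialization can be declared const, so the compiler places it in read-only memory and the pointers cannot be overwritten at runtime by an attacker.

```c
#include <assert.h>

/* A hypothetical "ops" structure, in the style of the kernel's
 * operations tables: a bundle of function pointers. */
struct file_ops {
    int (*open)(void);
    int (*close)(void);
};

static int demo_open(void)  { return 1; }
static int demo_close(void) { return 0; }

/* 'const' moves the structure into the read-only data section;
 * any write through it is rejected at compile time and would
 * fault at runtime. */
static const struct file_ops demo_ops = {
    .open  = demo_open,
    .close = demo_close,
};

/* Callers only ever need read access, so they can take a
 * const pointer as well. */
int call_open(const struct file_ops *ops)
{
    return ops->open();
}
```

Coccinelle semantic patches automate exactly this kind of transformation: they find structures that are only ever initialized, never written, and add the `const` qualifier.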

In the next talk, Tim Hemel (securesoftware.nl) talked about "Making Security Visible". Being a security consultant who helps customers write better software, he asked himself: "Why do we still see 30-year-old security issues in today's software?" He found that, as it is still very rare that people die from software bugs, the industry is still focussing on features. One way out is to play with incentives, i.e. put a sticker "this system is secure" onto the hardware and declare the system secure :-) He then thought about more realistic ways to make the security status of a given system visible. A good start is to analyze a system and find out its security requirements (threats, how to protect against them), then look at the stages from requirements over architecture to implementation and find out where things can go wrong. It is important not to do this alone, because it needs a dialog between the business side, the legal side and the technical side of a project. A tool that makes this possible is a STRIDE requirements analysis, which offers a systematic set of rules which should be taken care of for each system. He found that, having done this analysis, it is easy to transform it into a set of test cases. In parallel, a set of secure coding standards for the target application language is necessary. Finally, there are many tools that make security visible. Tim recommends the FSS framework, which is open source, and having a look at the Linux Foundation's Core Infrastructure Initiative.

Till Jaeger (JBB Rechtsanwälte) talked about "Security holes and product liability" and the question which (security) damages a vendor is liable for. He outlined that, according to legal understanding, software is a product, and a defect caused by a security incident might result in health damage or, for example, loss of data. Under German liability law one needs to find out whether someone violated his duties. Defects might result from issues with construction, production, instruction or observation, and the manufacturer needs to take care of all of those areas. If, for example, a security issue is published, manufacturers need to react and, for example, inform their customers of the actions taken. The crucial question is: which level of security is required? That's quite a difficult question, as it involves technical and economic considerations. However, a certain minimal level of security can always be expected. It is important to understand that the product liability act only applies to consumer products, and it cannot be restricted, e.g. by special contracts. On the other hand, general product liability allows contractual limitations and also covers damages such as loss of data. Under German law, you need to prove that there is a damage. My conclusion: the talk clearly outlined that, even without governments establishing stronger rules for IoT devices, manufacturers are already liable today for their devices and the damage they create.

In the final talk of this session, Philippe Ombredanne spoke about "ScanCode: A fresh open source take on open source software license and origin scan". He ran a proprietary scanner and ScanCode over the Debian code base and looked at the places where the tools disagreed. He looked into the tricks and techniques used by the scanners and gave some awesome insights into the weird world of those scanning tools. My conclusion: as always, proper packaging tools already take care of license identification, so of course packages using those are handled easily by the scanners. ScanCode evolves, so it might be a good idea to follow its development.



Post has shared content
OSADL Networking Day, Part II

In the 2nd session, Jan Altenberg talked about why you should care about security. He repeated customer arguments he has heard, such as "my device is running in a secure network" - that didn't work, e.g., for the UK hospitals. Also, "there is no interest in attacking your device" is not a valid argument, because devices can be abused for botnets, no matter what device it is. Some people even assume that their software is "100% bug free", because, oh, well... And, last but not least, if manufacturers don't care about security, they will almost certainly be forced to care by governments in the future. There are several easy steps to secure devices: don't run things as root if not necessary, use file system permissions, remove unnecessary services and make use of capabilities, plus several more higher-level mechanisms in Linux. If you use a distribution like Debian, it already takes care of several of those issues. An important aspect is that security needs to be taken care of at the beginning of a project, not at the end (because by then everyone has run out of time). Finally, if companies need help with these things, support companies like his are there to help.
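The "don't run as root" advice can be sketched in a few lines of C (a minimal illustration, not code from the talk): a daemon that needs root only for setup, e.g. to bind a privileged port, can permanently drop to an unprivileged user afterwards.

```c
#include <sys/types.h>
#include <unistd.h>

/* Permanently drop from root to the given unprivileged uid/gid.
 * Returns 0 on success, -1 on failure. */
int drop_privileges(uid_t uid, gid_t gid)
{
    /* Order matters: drop the group first, because after dropping
     * the uid we no longer have permission to change groups. */
    if (setgid(gid) != 0)
        return -1;
    if (setuid(uid) != 0)
        return -1;
    /* Paranoia check: regaining root must now be impossible. */
    if (setuid(0) == 0)
        return -1;
    return 0;
}
```

A daemon would call this right after acquiring its privileged resources and before touching any untrusted input, so that a later compromise only yields an unprivileged process.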

Next, Patrick Ohly (Intel) talked about "Four system update mechanisms except RAUC, with and without integrity protection", with a focus on integrity protection of the filesystem. Patrick works on the Yocto project, so building root filesystems before shipping is one of his tasks. Often, devices are located in areas where they can't be physically updated by a technician, so updating over the network and making sure only the intended software runs on the device is essential. Patrick looked at swupd, OSTree (used by AGL), swupdate, Mender.io and RAUC, but in his talk he wanted to focus on the underlying mechanisms, not on particular solutions. One of the key criteria is whether an update system is block based or file based: some integrity mechanisms work on block level, some on file level, and in order to activate updated content, block based mechanisms need a reboot, but they keep the image together and improve testability. On the other hand, file based systems make live updates much faster (but maybe also more complicated). Wear on flash based storage media is also an aspect which needs to be taken care of. Another aspect is that some update systems make assumptions about which bootloader to use, which update server to support and which provisioning service to use. Several systems seem to agree on hawkbit for provisioning. The kernel has mechanisms for integrity measurement and enforcement (IMA/EVM), but their finding is that e.g. SQLite has issues with it (it keeps files open, so the hashes are never recreated), and directories are not protected. Moreover, upstream development of IMA/EVM is slow, but there are block based alternatives like DM-Verity, coming from ChromeOS. Finally, Intel now has an IoT demo platform based on Yocto, where they demonstrate the whole setup.
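The block-based approach can be sketched as follows (an illustrative toy, not how DM-Verity is actually implemented: real systems use a cryptographic hash such as SHA-256 and a Merkle tree over the block device; a simple mixing checksum keeps this self-contained):

```c
#include <stddef.h>
#include <stdint.h>

#define BLOCK_SIZE 4   /* tiny block size for illustration only */

/* Stand-in for a cryptographic hash, FNV-1a style mixing. */
static uint32_t block_hash(const uint8_t *block, size_t len)
{
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < len; i++)
        h = (h ^ block[i]) * 16777619u;
    return h;
}

/* Verify an image block by block against a table of expected hashes
 * (the table itself would be signed in a real system). Returns the
 * index of the first corrupted block, or -1 if everything matches. */
static int verify_image(const uint8_t *img, size_t nblocks,
                        const uint32_t *expected)
{
    for (size_t b = 0; b < nblocks; b++)
        if (block_hash(img + b * BLOCK_SIZE, BLOCK_SIZE) != expected[b])
            return (int)b;
    return -1;
}
```

Because the hashes cover raw blocks rather than files, any change to the device contents is detected, including the directory-metadata gap mentioned for IMA/EVM; the price is that updated content only becomes active after a reboot into the newly verified image.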

In the last talk before lunch, my colleague +Enrico Jörns talked about the RAUC (Robust Auto Update Controller) framework. While customers might disagree, the most important reason for updating is deploying security updates and bugfixes, not features. Updating should be as robust as possible; unattended updates should not brick your device. In addition, unauthorized modification should be prevented. Often people start with a shell script (well, there is never enough time to develop an update system, right?), but over time it has turned out that this often misses a lot of important corner cases regarding NAND handling, sudden power loss, out-of-memory situations etc. An updating concept always starts with a controlled build environment (e.g. Yocto, PTXdist, Buildroot) and a lot of (mostly automated) testing of the generated root filesystem. Then you need to verify identity, both of the device (is this the right image for it?) and of the update service (is it authorized to update this device?). In order to achieve atomicity, RAUC makes use of redundancy. A+B scenarios have the advantage of being really robust (you can fall back if something goes wrong), but they need enough space for two systems. One of the design criteria for RAUC was that it is a framework, so you can use it with many different bootloaders (Barebox, U-Boot, Grub) and media (USB stick, NAND, eMMC, ...). RAUC consists of an update daemon that runs on the device under Linux, plus a command line tool that talks to the daemon via D-Bus. Updates are packed into bundles (compressed, mountable squashfs images) which are signed with X.509 signatures and can basically contain anything. Bundles contain things to put into slots (e.g. rootfs, app-fs, bootloader). Enrico outlined that RAUC also supports different integrity mechanisms (IMA/EVM, DM-Verity), even those where files are re-hashed with a key which is only available on the target. Finally, RAUC can be integrated with the Hawkbit deployment server. For integration, there is meta-rauc for Yocto, and it is also integrated in PTXdist mainline.
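The A+B fallback logic can be sketched like this (a hypothetical simplification of what a bootloader does in such a scheme, not RAUC's actual implementation): the freshly updated slot gets a limited number of boot attempts, and if it never marks itself as good, the loader falls back to the other slot.

```c
/* Illustrative A/B boot slot selection with an attempt counter. */
enum slot { SLOT_A = 0, SLOT_B = 1 };

struct slot_state {
    int tries_left;   /* remaining boot attempts for this slot */
    int marked_good;  /* set from userspace after a successful boot */
};

/* Pick a slot: use 'preferred' while it is known-good or still has
 * boot attempts left (consuming one per try), otherwise fall back
 * to the other slot. */
static enum slot select_boot_slot(struct slot_state st[2],
                                  enum slot preferred)
{
    enum slot other = (preferred == SLOT_A) ? SLOT_B : SLOT_A;

    if (st[preferred].marked_good || st[preferred].tries_left > 0) {
        if (!st[preferred].marked_good)
            st[preferred].tries_left--;   /* consume one attempt */
        return preferred;
    }
    return other;
}
```

After a successful boot, the running system sets `marked_good` for its slot (with real bootloaders this is done via their environment variables), which is what makes unattended updates safe: a kernel that never comes up simply exhausts its tries and the device reboots into the previous system.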



Post has shared content
Today: OSADL Networking Day conference in Heidelberg. One of the main topics is software updating, so the day promises to be interesting!