CISTER Research Centre
11 followers
CISTER Research Centre's posts

The whole point of a proof is to convey reasoning to a reader. To this end, in most of the scientific papers we write every day, we use arguments: we discuss propositions, i.e., statements/claims that must be either true or false, and we link propositions together in order to reach particular and convincing conclusions. In this talk we discuss guidelines for good mathematical writing by revisiting how to write a proof. What is it? How to begin? What are the requirements? These are a few of the questions to which we will provide answers.

Added photos to CISTER Periodic Seminar Series - Damien Masson.

This talk presents FKP, a scheduling algorithm for hard real-time sporadic tasks that takes its decisions in constant time and that outperforms fixed-priority-based schedulers in terms of schedulability. FKP makes no assumption on the deadline/period relation. It also has the property of significantly reducing the number of preemptions a job suffers, compared both with fixed-priority-based schedulers and with EDF. Moreover, an exact schedulability test based on exact worst-case response time computations is presented. This is joint work with Geoffrey Nelissen.

Added photos to CISTER Periodic Seminar Series - Per Lindgren and David Pereira.

The IEC 61499 standard proposes an event-driven execution model for distributed control applications, for which only an informal execution semantics is provided. Consequently, run-time implementations are not rigorously described, and their behavior therefore relies on the interpretation made by the tool provider. In this paper, as a step towards a formal semantics, we focus on the Execution Control Chart semantics, which is fundamental to the dynamic behavior of Basic Function Block elements. In particular, we develop a well-formedness criterion that ensures a finite number of Execution Control Chart transitions for each triggering event. We also describe the first step towards the mechanization of the well-formedness checking algorithm in the Coq proof assistant so that, ultimately, we are able to show, once and for all, that this algorithm is correct with respect to our proposed execution semantics. The algorithm is extractable from the mechanization in a correct-by-construction way, and can be directly incorporated into a certified toolchain for analysis, compilation and execution of IEC 61499 models. As a proof of concept, a prototype tool, RTFM-4FUN, has been developed. It performs well-formedness checks on Basic Function Blocks using the extracted algorithm's code.
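The intuition behind such a well-formedness criterion can be sketched in a few lines (a simplified illustration, not the paper's Coq-extracted algorithm; the `is_well_formed` function and the transition encoding are hypothetical): an Execution Control Chart can only guarantee finitely many transitions per triggering event if the subgraph of transitions that fire without consuming a fresh event contains no cycle.

```python
# Hypothetical sketch: model an ECC as a map from state to outgoing
# transitions, each tagged with whether it consumes a fresh event.
# Well-formed <=> the subgraph of event-free transitions is acyclic,
# so a single triggering event can only cause finitely many steps.
def is_well_formed(transitions):
    """transitions: dict state -> list of (consumes_event, next_state)."""
    # Keep only transitions that can fire without a new event.
    graph = {s: [t for (consumes, t) in outs if not consumes]
             for s, outs in transitions.items()}

    WHITE, GREY, BLACK = 0, 1, 2
    color = {s: WHITE for s in graph}

    def has_cycle(s):
        color[s] = GREY
        for t in graph.get(s, []):
            if color.get(t, WHITE) == GREY:
                return True          # back edge: an event-free loop
            if color.get(t, WHITE) == WHITE and has_cycle(t):
                return True
        color[s] = BLACK
        return False

    return not any(color[s] == WHITE and has_cycle(s) for s in graph)
```

Here the check is a plain depth-first cycle detection; the actual criterion and its Coq mechanization are developed in the paper.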

Maxwell’s demon paradox was proposed in 1867 as a challenge to the second law of thermodynamics: a sudden and unexplained increase of temperature was achieved (theoretically) by means of a simple information measurement process on the molecules of a gas in a container. The debate over the “demon” problem regained attention a century later by providing the link between classical physics (thermodynamics) and the growing fields of information theory and quantum computing. One main result can be derived from the study of the Maxwell’s demon problem: information erasure is the fundamental thermodynamically irreversible process that compensates the imbalance of entropy created by the operation of the “demon”. Network diversity multiple access (NDMA) is a scheme for contention resolution in wireless networks that is based on this principle: avoid as much as possible a “dumb” erasure or discarding of information. In NDMA, signals involved in a collision are not discarded as in conventional random access protocols. Instead, they are stored in memory for further processing. Additional retransmissions are induced and collected by the system to create an adaptive full-rank multiple-input multiple-output (MIMO) system from which the contending signals can be conveniently recovered. NDMA is the algorithm with the highest potential throughput in the literature, and it also shows superior energy efficiency compared with ALOHA-type solutions. The energy consumed by the “induced” retransmissions is proved to be balanced out by the reduction of back-off retransmissions for error control. This presentation focuses on the recent developments for this family of algorithms, the existing issues being tackled, and future work. The use of successive interference cancellation (SIC) has allowed the algorithm to surpass the barrier of M packets per time slot, where M is the number of antennas at the receiver.
This is so far the largest throughput achieved by any random access protocol, with promising applications in wireless local area networks (WLANs) and mainly in dense wireless sensor and actuator networks (DWSANs). Further improvements of this algorithm are envisioned using more thermodynamic principles, such as the availability of channel and queuing state information. This scenario is typical of future networks with cognitive, self-organized and software-defined radios. NDMA and its extensions using blind separation, interference alignment and stochastic geometry constitute serious candidates for the next generation of multiple access and semi-centralized resource allocation systems.
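The core recovery step described above can be illustrated with a small numerical sketch (an idealized, noiseless setting with perfectly known channel coefficients; this is not the NDMA protocol itself, only the virtual-MIMO inversion it relies on):

```python
import numpy as np

rng = np.random.default_rng(0)

K = 3          # users involved in the collision
L = 8          # symbols per packet

# Each user's packet (one row per user) -- the unknowns to recover.
X = rng.standard_normal((K, L))

# Each slot (the original collision plus K-1 induced retransmissions)
# contributes one receive equation y_t = h_t @ X, where h_t holds that
# slot's fading coefficients. Stacking them yields a KxK channel matrix,
# full rank with high probability for independent fading.
H = rng.standard_normal((K, K))
Y = H @ X          # stored collision signals + induced retransmissions

# With K independent observations the virtual MIMO system is invertible
# and the contending packets are recovered jointly.
X_hat = np.linalg.solve(H, Y)
print(np.allclose(X_hat, X))    # True in this idealized setting
```

In practice the channel must be estimated, noise limits the conditioning of `H`, and SIC is what pushes throughput beyond M packets per slot, as the talk discusses.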

ABSTRACT:
In recent years there has been an increase in the use of embedded systems, and many of these systems are spatially separated, performing distributed communications in order to meet the real-time requirements of their applications. Many of these applications have very strict timing requirements and demand a high level of determinism with regard to the instants at which their tasks are executed. The Flexible Time Triggered over Switched Ethernet (FTT-SE) protocol is based on the Flexible Time Triggered (FTT) paradigm, following a master/slave scheme in which the master node manages the traffic exchanged by the slave nodes according to the scheduling algorithm chosen by the master node (e.g., Rate Monotonic, Earliest Deadline First, etc.). The FTT-SE protocol allows the transmission of both synchronous and asynchronous traffic (real-time and best-effort) over a Switched Ethernet network. Communications within an FTT-SE network are based on the reservation of fixed-duration time slots called Elementary Cycles (ECs), wherein each EC is organized into windows in order to guarantee bandwidth for the transmission of the different types of traffic.
The FTT-SE module is implemented as an application layer of ns-3. Currently, the module supports concurrent synchronous, asynchronous and best-effort communications. The ECs are divided into sub-windows that are dedicated to the transmission of the different types of traffic. The types of traffic in the FTT-SE protocol are: (i) synchronous traffic (with priority 1, the highest); and (ii) asynchronous traffic, which itself comprises three types: (a) hard real-time (with priority 2); (b) soft real-time (with priority 3); and (c) best-effort (with priority 4, the lowest). All types of traffic can be transmitted in a single EC, which is built by the master node. Furthermore, the percentage of the EC dedicated to synchronous or asynchronous transmission is tunable and can be decided based on the application needs.
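The EC structure described above can be sketched as follows (an illustrative toy model; `build_ec`, the window shares and the message names are hypothetical and are not the ns-3 module's API): the master fills per-traffic-class sub-windows in priority order, each with a tunable share of the EC.

```python
# Toy sketch of EC construction (hypothetical names, not the ns-3 module):
# priority 1 = synchronous, 2 = hard RT, 3 = soft RT, 4 = best-effort.

EC_LENGTH_US = 1000

# Tunable split of the EC among traffic classes, as the abstract describes.
WINDOW_SHARE = {1: 0.40, 2: 0.25, 3: 0.20, 4: 0.15}

def build_ec(pending):
    """pending: list of (priority, msg_id, tx_time_us); returns the schedule."""
    budget = {p: share * EC_LENGTH_US for p, share in WINDOW_SHARE.items()}
    schedule = []
    for prio in sorted(budget):                  # priority 1 (highest) first
        for p, msg, cost in pending:
            if p == prio and cost <= budget[prio]:
                schedule.append(msg)
                budget[prio] -= cost             # consume this sub-window
    return schedule

msgs = [(1, "sync_a", 200), (1, "sync_b", 150), (2, "hrt_c", 200),
        (4, "be_d", 100), (4, "be_e", 100)]
print(build_ec(msgs))   # ['sync_a', 'sync_b', 'hrt_c', 'be_d']
```

Note how `be_e` is left for a later EC once the best-effort sub-window is exhausted: the fixed per-class budgets are what isolate real-time traffic from best-effort load.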

Added photos to CISTER Periodic Seminar Series - Tiago Cerqueira.