The research master is composed of two parts: common courses and specialized courses. The common courses cover research in general: there are courses on writing scientific papers and on scientific methodology, as well as a series of conferences about research in France and around the world. Moreover, in order to validate our internship, we need to write a bibliographic study on our internship subject before it starts. This bibliographic study is defended during the master's colloquium.
The METH (Methodology) course is composed of two parts. The first part teaches good practices in software development, such as versioning, continuous integration, and deployment with Docker. The second part is an intensive week on how to assess a research hypothesis through valid and reproducible experiments.
The RAS (Rédaction d'articles scientifiques / Writing of scientific publications) course teaches how the research world communicates and how the results of one's work are validated.
The CONF (Conference) course is a cycle of presentations that provides an overview of other aspects of academic and industrial research.
The BIBL (Bibliography) course is part of our internship work. From mid-October to late January, we had to read many articles related to our internship subject and produce a report on our reading. My subject was "Self-adaptable virtual machines"; the goal of this internship is to design a new pattern for language interpretation with an explicit feedback loop inspired by Dynamic Adaptive Systems (DAS).
The colloquium is an introductory exercise in oral academic presentation. The presentations offer an overview of current research on the subjects of the different internships (the oral equivalent of BIBL).
DSL - Domain-Specific Languages
This course starts with an introduction to MDE and SLE, and then moves into a deeper discussion of how to express the knowledge of particular domains in tool-supported DSLs. The second part lets students investigate the applications of MDE and SLE to different types of software systems, from different starting points (language, business knowledge, standard, etc.) and for different software engineering activities such as Requirements Engineering, Variability Management, Analysis, Design, Implementation, and Validation & Verification.
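As a small illustration of the idea (hypothetical example, not taken from the course's tooling), a DSL can even be embedded directly in a general-purpose language so that domain knowledge reads declaratively. Here, a tiny internal DSL describes a finite-state machine:

```python
# A minimal sketch (illustration only): an internal DSL embedded in Python
# for describing a domain -- here, a tiny finite-state machine -- so that the
# domain knowledge reads declaratively rather than as imperative code.

class StateMachine:
    def __init__(self, initial):
        self.state = initial
        self.transitions = {}

    def on(self, event, source, target):
        # declarative rule: "on <event>, go from <source> to <target>"
        self.transitions[(source, event)] = target
        return self  # chaining makes the description read like a small language

    def fire(self, event):
        self.state = self.transitions[(self.state, event)]
        return self.state

# The description below is the "program" written in the DSL.
door = (StateMachine("closed")
        .on("open",  "closed", "opened")
        .on("close", "opened", "closed"))

print(door.fire("open"))   # opened
print(door.fire("close"))  # closed
```

The chained `on(...)` calls play the role of the DSL's concrete syntax; an external DSL would instead come with its own grammar and generated tooling.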
SEM - Semantics
The objective of this course is to provide the elements needed to rigorously prove properties of programming languages and of programs written in these languages. Formal semantics makes it possible to describe unambiguously the expected behaviour of a program. For example, it is possible to establish the correctness of a compiler by proving that the source program and the compiled program exhibit the same observable behavior with respect to a given semantics. The course studies different forms of semantics for different features of programming languages.
OPC - Optimizing and Parallelizing Compilers
The objective of the course is, on the one hand, to establish a state of the art of the optimization techniques used in current compilers (gcc, icc, llvm) and, on the other hand, to address the main research issues in this field. The first part of the course focuses on the various factors that influence application performance and on the methods and tools that can be used to detect a problem. We then discuss the optimization techniques implemented by a compiler to improve performance. In the second part of the module, the course focuses on loop optimization and parallelization techniques based on polyhedral compilation. This section is divided into two parts: first, a presentation of the main loop transformations and their impact on performance; then, how these transformations can be automated in a compiler.
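As a hand-written illustration of one such transformation (a sketch of my own, not an example from the course), loop tiling restructures a loop nest to improve cache locality while executing exactly the same set of iterations; a compiler must prove this legality (preservation of dependences) before applying it automatically:

```python
# A minimal sketch (hypothetical example): loop tiling, a classic loop
# transformation, applied by hand to a matrix multiplication. The tiled
# version traverses the same iteration space tile by tile, so the set of
# executed (i, j, k) triples -- and hence the result -- is unchanged.

def matmul_naive(A, B, n):
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

def matmul_tiled(A, B, n, t=4):
    C = [[0] * n for _ in range(n)]
    for ii in range(0, n, t):          # iterate over tiles of size t
        for jj in range(0, n, t):
            for kk in range(0, n, t):
                for i in range(ii, min(ii + t, n)):   # iterate inside a tile
                    for j in range(jj, min(jj + t, n)):
                        for k in range(kk, min(kk + t, n)):
                            C[i][j] += A[i][k] * B[k][j]
    return C

n = 8
A = [[i + j for j in range(n)] for i in range(n)]
B = [[i * j for j in range(n)] for i in range(n)]
assert matmul_naive(A, B, n) == matmul_tiled(A, B, n)  # same result
```

In a compiled language the tiled version reuses cached blocks of `A` and `B`; polyhedral compilation generalizes such transformations by reasoning on the loop nest as a polyhedron of integer points.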
BSI - Big-data Storage and Infrastructure
The purpose of this module is to provide an introduction to massive data management (Big Data) and data science: main concepts, challenges, application areas, and a presentation of the main state-of-the-art systems. It introduces the main data storage and processing models, including the MapReduce model and its derivatives, as well as the main existing technologies, such as Hadoop, Spark, and Flink. Detailed descriptions and comparative analyses of these systems are proposed, in order to allow an understanding of their underlying objectives, possible application areas, and architectural choices. We study the techniques widely used to distribute massive data processing (gossip, streaming, video, etc.). In particular, we detail multicast/video streaming protocols as well as decentralized systems for data aggregation.
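The MapReduce model mentioned above can be sketched in a few lines (an in-process illustration of mine, with no cluster involved): a map phase emits key/value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. Systems such as Hadoop or Spark distribute exactly this pattern over many machines:

```python
# A minimal sketch of the MapReduce programming model (in-process, no
# cluster): word count expressed as map, shuffle (group by key), and reduce.

from collections import defaultdict

def map_phase(documents):
    # map: emit a (word, 1) pair for every word of every document
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    # shuffle: group all values emitted for the same key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # reduce: aggregate the values of each key (here, sum the counts)
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big systems", "data systems data"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # {'big': 2, 'data': 3, 'systems': 2}
```

In a real deployment the map tasks run on the nodes holding the data, and the shuffle is the network-intensive step that the frameworks optimize.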
MAD - Models and Algorithms for Distributed Systems
This module aims to lay the foundations of distributed systems and algorithms. It develops the central notion of concurrency (or parallelism) by showing how it impacts the design and programming of these systems, depriving them of a notion of global time. The emphasis is on representing the executions of such systems as partial orders of events. The module shows how to model and verify such systems, and how to develop programming primitives that ensure global properties and are resilient to asynchrony and failures.
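A classic answer to the absence of global time is the Lamport logical clock (a sketch of my own, not a course artifact): each process keeps a counter, and the update rules guarantee that if event a happens-before event b in the partial order, then clock(a) < clock(b):

```python
# A minimal sketch (hypothetical example): Lamport logical clocks, a classic
# way to order events in a system with no global time. Each process keeps a
# counter; the rules below guarantee that if event a happens-before event b,
# then clock(a) < clock(b).

class Process:
    def __init__(self, name):
        self.name = name
        self.clock = 0

    def local_event(self):
        self.clock += 1          # any internal event ticks the clock
        return self.clock

    def send(self):
        self.clock += 1          # sending is an event: tick, then attach
        return self.clock        # the timestamp to the message

    def receive(self, msg_timestamp):
        # receiving: jump past the sender's timestamp, then tick
        self.clock = max(self.clock, msg_timestamp) + 1
        return self.clock

p, q = Process("p"), Process("q")
t_send = p.send()            # p's clock: 1
q.local_event()              # q's clock: 1
t_recv = q.receive(t_send)   # q's clock: max(1, 1) + 1 = 2
assert t_send < t_recv       # send happens-before receive
```

Note that the converse does not hold: concurrent (incomparable) events may still get ordered timestamps, which is precisely why the execution is a partial order rather than a total one.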
DMV - Data Mining and Visualization
The objective of this course is to introduce students to the concepts of exploratory data mining. This discipline deals with data in which potentially useful knowledge is present, but where it is not clear what to look for in the first place. This is a very common situation in science and industry. The techniques presented are pattern mining (discovery of different forms of regularity in the data), declarative data mining (a simplified way to define and refine what we are looking for), and interactive data mining (co-construction of a solution between the algorithm and the user). In addition, the course introduces basic notions of information visualization, which are essential to present the results to analysts and help them interact with the system.
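The simplest form of pattern mining can be sketched concretely (an illustration of mine, not course material): frequent itemset mining counts how often items co-occur in transactions and keeps the combinations above a support threshold:

```python
# A minimal sketch (illustration only): frequent itemset mining, the simplest
# form of pattern mining. We count how often each pair of items co-occurs in
# a set of transactions and keep the pairs above a support threshold.

from itertools import combinations
from collections import Counter

transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"butter", "milk"},
    {"bread", "butter"},
]

def frequent_pairs(transactions, min_support):
    counts = Counter()
    for t in transactions:
        for pair in combinations(sorted(t), 2):  # every 2-item combination
            counts[pair] += 1
    # keep only the pairs occurring in at least min_support transactions
    return {pair: c for pair, c in counts.items() if c >= min_support}

print(frequent_pairs(transactions, min_support=2))
```

Real pattern miners (Apriori, FP-growth) extend this idea to itemsets of any size while pruning the search space; the "declarative" and "interactive" variants mentioned above change how the analyst specifies and refines the patterns of interest.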