In the big data era, traditional data management and analytic systems are no longer adequate to efficiently and effectively analyze large amounts of (internet-related) data. Hence, novel data models, programming paradigms, and database management systems are needed.
The course addresses the challenges arising in the Big Data era, examining in depth big data processing and knowledge extraction. Specifically, the course covers how to collect, store, retrieve, and analyze big data to mine useful knowledge for internet applications. The course covers not only data analytics aspects but also novel programming paradigms (e.g., MapReduce, Spark RDD-based programs) and discusses how they can be exploited to support engineers in extracting knowledge from data. Practical examples of big data techniques for data science applied to the internet domain will be presented.
Traditional data management and analytic systems are no longer adequate in the big data era. Hence, to manage and fruitfully exploit the vast amount of available heterogeneous (internet-related) data, novel data models, programming paradigms, information systems, and network architectures are needed.
The course addresses the challenges arising in the Big Data era. Specifically, the course will cover collecting, storing, retrieving, and analyzing big data to mine helpful knowledge and insightful hints. The course covers data modeling, analytics, and novel programming paradigms (e.g., MapReduce, Spark RDDs), and discusses how they can be exploited to support big data scientists in extracting insights from data.
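As a first flavour of these paradigms, a minimal word count in the Spark RDD style is sketched below in Python/PySpark. It is only an illustrative sketch: a local Spark installation and the input file name docs.txt are assumptions made for the example, not course material.

    # Word count with the Spark RDD API (illustrative sketch; "docs.txt" is a hypothetical input file)
    from pyspark import SparkConf, SparkContext

    sc = SparkContext(conf=SparkConf().setAppName("WordCountSketch").setMaster("local[*]"))

    counts = (sc.textFile("docs.txt")                    # one RDD element per input line
                .flatMap(lambda line: line.split())      # map phase: one element per word
                .map(lambda word: (word.lower(), 1))
                .reduceByKey(lambda a, b: a + b))        # reduce phase: sum the counts per word

    # Print the 10 most frequent words
    for word, n in counts.takeOrdered(10, key=lambda p: -p[1]):
        print(word, n)

    sc.stop()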
The course aims at providing:
• Knowledge of the main technological characteristics of the infrastructures and distributed frameworks used to deal with big data (e.g., Hadoop and Spark).
• Ability to write distributed programs to process and analyze big data by means of big data frameworks (Spark RDD- and DataFrame-based programming).
• Ability to implement scalable data analytics processes, based on data mining and machine learning algorithms, for internet applications (e.g., network traffic data analysis).
• Knowledge of the (relational and non-relational) database systems that are used to store and query big data.
The course aims at providing:
• Knowledge of the main problems and opportunities arising in the big data context and technological characteristics of the infrastructures and distributed systems used to deal with big data (e.g., Hadoop and Spark).
• Ability to write distributed programs to process and analyze big data using programming paradigms based on MapReduce and Spark (Spark RDD- and DataFrame-based programming; see the sketch after this list).
• Knowledge of the (relational and non-relational) database systems that are used to store big data.
• Ability to implement scalable data analytics processes based on data mining and machine learning algorithms for big data applications.
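As a minimal illustration of the DataFrame-based programming mentioned in the objectives above, the sketch below reads a CSV file and computes a grouped aggregation with the Spark DataFrame API. The file name and column names (logs.csv, host, bytes) are assumptions made only for the example.

    # DataFrame-based sketch: filter, group, and aggregate a (hypothetical) CSV file
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import avg, col

    spark = SparkSession.builder.appName("DataFrameSketch").getOrCreate()

    logs = spark.read.csv("logs.csv", header=True, inferSchema=True)   # hypothetical input
    report = (logs.filter(col("bytes") > 0)          # keep records with a positive byte count
                  .groupBy("host")                   # group records by host
                  .agg(avg("bytes").alias("avg_bytes")))
    report.show(10)

    spark.stop()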
Basic object-oriented programming skills
Knowledge of the Python programming language
Knowledge of the Python language, basic knowledge of the Java language, and basic knowledge of traditional database concepts (the relational model and the SQL language).
Lectures (45 hours)
• Introduction to Big data: characteristics, problems, opportunities (3 hours)
• Hadoop and its ecosystem: infrastructure and basic components (3 hours)
• Apache Spark Architecture (3 hours)
• Spark RDD- and dataset-based programming paradigm (16.5 hours)
• Streaming data analysis: Spark Streaming (3 hours)
• Data mining and Machine learning libraries: Spark MLlib (4.5 hours)
• Graph analytics: Spark GraphX and GraphFrame (4.5 hours)
• Databases for Big data: data models, design, and querying (e.g., HBase and MongoDB) (4.5 hours)
• Introduction to network traffic data analytics (3 hours)
Laboratory activities (15 hours)
• Development of applications for big data analytics based on Spark (15 hours); a minimal MLlib-based example of such an application is sketched below.
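The MLlib-based example referred to above is sketched here: a small clustering pipeline in PySpark. The input file, the feature column names (samples.csv, f1, f2), and the choice of k-means with k = 3 are assumptions made only for illustration.

    # MLlib sketch: k-means clustering on a (hypothetical) CSV dataset
    from pyspark.sql import SparkSession
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.clustering import KMeans

    spark = SparkSession.builder.appName("MLlibSketch").getOrCreate()
    data = spark.read.csv("samples.csv", header=True, inferSchema=True)

    # MLlib algorithms expect a single vector column; assemble the feature columns into it
    features = VectorAssembler(inputCols=["f1", "f2"], outputCol="features").transform(data)

    model = KMeans(k=3, featuresCol="features").fit(features)      # distributed k-means
    model.transform(features).select("f1", "f2", "prediction").show(5)

    spark.stop()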
Lectures (45 hours)
• Introduction to Big data: characteristics, problems, opportunities (3 hours)
• Hadoop and its ecosystem: infrastructure and essential components (3 hours)
• MapReduce programming paradigm (9 hours)
• Spark: Spark Architecture, RDD-based and Spark SQL-based programming (16 hours)
• Streaming data analysis: Spark Streaming (6 hours)
• Data mining and Machine learning libraries: Spark MLlib (6 hours)
Laboratory activities (15 hours)
• Development of applications by means of Hadoop and Spark (15 hours); a minimal Spark Streaming example is sketched below.
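The Spark Streaming example referred to above is sketched here with the DStream API: a word count computed over micro-batches read from a socket. The host, port, and batch interval (localhost, 9999, 5 seconds) are assumptions for illustration, e.g., a text source started with nc -lk 9999.

    # Spark Streaming (DStream) sketch: word count over 5-second micro-batches from a socket
    from pyspark import SparkConf, SparkContext
    from pyspark.streaming import StreamingContext

    sc = SparkContext(conf=SparkConf().setAppName("StreamingSketch").setMaster("local[2]"))
    ssc = StreamingContext(sc, batchDuration=5)          # 5-second micro-batches

    lines = ssc.socketTextStream("localhost", 9999)      # hypothetical text source
    counts = (lines.flatMap(lambda line: line.split())
                   .map(lambda word: (word, 1))
                   .reduceByKey(lambda a, b: a + b))
    counts.pprint()                                      # print the counts of each micro-batch

    ssc.start()
    ssc.awaitTermination()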
The course consists of Lectures (45 hours) and Laboratory sessions (15 hours). The laboratory sessions focus on the main topics of the course (Apache Spark, MLlib, NoSQL databases) and allow hands-on experimentation with the most widespread big data frameworks.
The course consists of Lectures (45 hours) and Laboratory sessions (15 hours).
The laboratory sessions focus on the course's main topics (MapReduce, Spark, and MLlib) and allow hands-on experimentation with the most widespread open-source products.
Copies of the slides used during the lectures, examples of exercises, and manuals for the activities in the laboratory will be made available. All teaching material is downloadable from the course website or the Teaching Portal.
Reference books:
• Matei Zaharia, Bill Chambers. Spark: The Definitive Guide (Big Data Processing Made Simple). O'Reilly Media, 2018.
• Advanced Analytics and Real-Time Data Processing in Apache Spark. Packt Publishing, 2018.
• Tom White. Hadoop: The Definitive Guide (third edition). O'Reilly Media, 2015.
Copies of the slides used during the lectures, examples of written exams and exercises, and manuals for the activities in the laboratory will be made available. All teaching material is downloadable from the course website or the Portal.
Reference books:
• Matei Zaharia, Bill Chambers. Spark: The Definitive Guide (Big Data Processing Made Simple). O'Reilly Media, 2018.
• Advanced Analytics and Real-Time Data Processing in Apache Spark. Packt Publishing, 2018.
• Tom White. Hadoop: The Definitive Guide (third edition). O'Reilly Media, 2015.
• Matei Zaharia, Holden Karau, Andy Konwinski, Patrick Wendell. Learning Spark (Lightning-Fast Big Data Analytics). O’Reilly, 2015.
Lecture slides; Textbook; Exercises; Exercises with solutions; Lab exercises; Lab exercises with solutions; Video lectures (current year);
Exam: Computer-based written test in class using the POLITO platform;
...
Exam: Written exam; Individual essay
The exam aims at assessing (i) the ability of the students to write distributed programs to process and analyze big data by means of novel programming paradigms (the Spark RDD- and dataset-based programming paradigms) and frameworks, and (ii) the knowledge of the students of the main concepts related to big data and of the technological infrastructures and distributed systems, including scalable relational and non-relational database systems, that are used to deal with big data.
The exam includes two mandatory parts: (i) a written exam and (ii) the evaluation of an individual report on the practices assigned during the course.
PART I - WRITTEN EXAM
The written exam lasts 2 hours and is composed of two subparts:
- 2 programming exercises (Spark RDD- and DataFrame-based programming), structured as open questions (27 points)
- 2 multiple-choice questions on all the topics addressed during the course (4 points)
The programming exercises aim at evaluating the ability of the students to write distributed programs to analyze big data by means of the novel programming paradigms that are introduced in the course.
The multiple-choice questions are used to evaluate the knowledge of the theoretical concepts of the course and, in particular, the knowledge of the characteristics of the main technological infrastructures and distributed systems (Hadoop and Spark), including scalable relational and non-relational database systems, that are used to deal with big data.
The evaluation of the programming exercises is based on the correctness and efficiency of the proposed solutions.
For each multiple-choice question, students receive two points if the answer is correct and zero points if the answer is wrong or missing.
The written exam is closed book.
- Books, notes, and any other paper material are not allowed.
- Electronic devices of any kind (PCs, laptops, mobile phones, calculators, etc.), apart from the PC used to take the test, are not allowed.
The maximum grade for the written exam is 31.
PART II - INDIVIDUAL REPORT
The second part of the exam consists of preparing an individual report on the practices assigned during the course and developed during the laboratory sessions.
The report aims at evaluating the ability of the students to implement data analytics processes for analyzing big data.
The evaluation of the report is based on its clarity and on the technical correctness and efficiency of the proposed and implemented solutions.
The maximum grade for the individual report is 31.
FINAL GRADE
The exam is passed if (i) the grade of the written exam is greater than or equal to 18 points and (ii) the grade of the individual report is greater than or equal to 18 points.
The final grade is a weighted average of the evaluations of the written exam (80%) and the individual report (20%). Specifically, the final grade is computed as: grade of the written exam × 0.8 + grade of the report × 0.2.
Students with disabilities or Specific Learning Disorders (SLD), in addition to reporting through the online procedure, are invited to inform the professor in charge of the course directly, at least one week before the beginning of the examination session, about the compensatory tools agreed upon with the Special Needs Unit, so that the professor can arrange the most suitable adaptation for the specific type of exam.
Exam: Computer-based written test in class using the POLITO platform;
The exam aims at assessing (i) the ability of the students to write distributed programs to process and analyze big data by means of novel programming paradigms and frameworks (the MapReduce programming paradigm and the Spark RDD-based programming paradigm), and (ii) the knowledge of the students of the main issues related to big data and of the technological infrastructures and distributed systems, including scalable relational and non-relational database systems, that are used to deal with big data.
The exam consists of a written onsite test (with the Exam+Lockdown platforms on the student's notebook) that lasts 1.5 hours.
Specifically, the written onsite test is composed of two parts:
- 1-3 programming exercises (structured as open questions) based on MapReduce- and Spark-based programming to be solved using the Java language (max 27 points)
- 1-3 multiple-choice questions on all the topics addressed during the course (max 6 points).
The programming exercises aim to evaluate the ability of the students to write distributed programs to analyze big data through the programming paradigms introduced in the course.
The multiple-choice questions are used to evaluate the knowledge of the theoretical concepts of the course and, in particular, the knowledge of the characteristics of the main technological infrastructures and distributed systems (Hadoop and Spark), including scalable relational and non-relational database systems that are used to deal with big data.
The evaluation of the programming exercises is based on the correctness and efficiency of the proposed solutions.
The exam is closed book.
- Books, notes, and any other paper material are not allowed.
- Electronic devices of any kind (PCs, laptops, mobile phones, calculators, etc.), apart from the PC used to take the test, are not allowed.
The exam is passed if the mark of the written exam is greater than or equal to 18 points.
In addition to the message sent by the online system, students with disabilities or Specific Learning Disorders (SLD) are invited to directly inform the professor in charge of the course about the special arrangements for the exam that have been agreed with the Special Needs Unit. The professor has to be informed at least one week before the beginning of the examination session in order to provide students with the most suitable arrangements for each specific type of exam.