Open access peer-reviewed chapter - ONLINE FIRST

Integrating AI into VET: Insights from AIM@VET’s First Training Activity

Written By

Žiga Emeršič, Peter Peer, Gregor Hrastnik, Nataša Meh Peer, José María Bey, María Meizoso-García, António Pedro Silva, Cláudia Domingues, Carla Abreu, António Costa, Dalila Durães, Paulo Novais, Cristina Renda and Abraham Prieto

Submitted: 20 February 2024 | Reviewed: 01 March 2024 | Published: 06 May 2024

DOI: 10.5772/intechopen.1004949

From the Edited Volume

Artificial Intelligence for Quality Education [Working Title], edited by Dr. Seifedine Kadry, IntechOpen

Abstract

This work presents the objectives, methodologies, and preliminary outcomes of the first training activity (TA1) within the AIM@VET project, an EU initiative aimed at integrating artificial intelligence (AI) into vocational education and training (VET) to align with labor market demands. Addressing the noticeable gap in AI education across various educational levels, AIM@VET, involving six partners from Spain, Portugal, and Slovenia, focuses on developing teacher-centered learning modules in key AI application areas: computer vision, robotics, and ambient intelligence. The project’s methodology involves universities in content preparation and VET teachers in content delivery to students, with an iterative feedback loop enhancing the curriculum’s relevance and effectiveness. TA1 demonstrated a practical approach to applying AI concepts through a mix of theoretical lessons and hands-on tasks, significantly improving students’ technical AI skills and readiness for the digital workforce. The activity underscored the importance of standardizing lesson creation protocols to produce a unified curriculum, thereby facilitating improved coordination among partners. This chapter will detail the project’s framework, its execution, and an analysis of the results obtained in the project’s first steps.

Keywords

  • AI literacy
  • artificial intelligence education
  • artificial intelligence modules (AIM)
  • Erasmus+ program
  • project-based learning
  • digital transformation
  • vocational training
  • robotics
  • ambient intelligence
  • computer vision

1. Introduction

The technical revolution behind artificial intelligence (AI) impacts society globally. Consequently, it is essential to educate the whole population in the fundamentals of this digital technology, not only to increase their job opportunities but also as a fundamental right, so that citizens are aware of the decisions taken by AI-based systems [1]. In this scope, the development of AI literacy is a key goal for most educational administrations and organizations around the world [2, 3]. It implies defining the skills and competencies that learners should acquire in this topic, as well as the procedures to achieve them, mainly through specific curricula and resources [4].

Many approaches to AI curricula development have arisen in the last 5 years, mainly focused on pre-university levels (secondary school). They come from official educational administrations [5], private initiatives [6], or directly from universities and schools [7]. All of them are still under development and testing because the creation of a formal and reliable curriculum in a new discipline takes time, reflection, and improvement. Most of the existing approaches are based on the guidelines of the AI4K12 initiative [8] and the five big ideas they propose (Perception, Representation and Reasoning, Learning, Natural Interaction, and Societal Impact). These ideas make up broad areas on which educators should focus when training secondary school students in the fundamentals of AI. Following these guidelines, some remarkable educational resources in topics like perception, representation, and, mainly, machine learning can be found [9]. The majority of them have been developed for compulsory levels, with a general educational perspective. The technical background required to follow the lessons is low, as there is no guarantee that students' digital knowledge and skills in programming, statistics, or logic are homogeneous across different countries and regions.

The current work focuses on Vocational Education and Training (VET), where the previous considerations must be reformulated [10]. These students have a clear technical focus, mainly practical, and their training must be aligned with labor market requirements. Therefore, the existing AI curricula for pre-university levels must be adapted to this scope, or new ones must be created. In addition, the curriculum organization in VET education differs from secondary school: the typical teaching units are organized into "learning modules", which are more specific and shorter. Hence, there are many specific features to consider when developing an AI curriculum for VET. Some initiatives have arisen in this realm, although they are in the early stages of development. Among them, the TaccleAI project stands out, having analyzed the particularities of AI education in VET over the last few years. The main conclusions of its final report have been taken here as a guideline for future development [11]. A more detailed description of the AIM@VET project and its objectives can be found in [12].

The authors of this paper were granted an Erasmus+ project in 2022, called AIM@VET, with the aim of developing AI learning modules adapted to VET education in three areas: autonomous robotics, computer vision, and ambient intelligence. The project encompasses one VET school and one university from each of Spain, Portugal, and Slovenia, leading to a team of six partners. They have been organized into 'work islands', a term referring to specialized working groups, each based in a different country and focused on one of the three distinct AI areas mentioned above. These work islands operate collaboratively yet independently, allowing each group to develop specialized teaching units (TUs) and then share insights and outcomes with other groups. This organizational structure promotes specialization and efficiency in the development of educational content tailored to VET education while facilitating broad collaboration among partners across different countries. Each university team creates specific TUs in its particular area, which are revised by the teachers at the corresponding VET school and tested with their students. From these results, the university team obtains feedback to improve the TUs. At the project's end, the TUs will be organized into three independent learning modules and published on the project website to make them available to the entire teaching community [13].

A key step in this project was the first Training Activity (TA1), held in Slovenia, as it tested the TUs developed by the different ‘work islands’ in a novel context. TA1 facilitated a deeper level of evaluation by involving students from various ‘work islands’, thus enabling the TUs to be tested across a more diverse student body. Specifically, the initial two teaching units, TU1 and TU2, which were developed as part of the project, were shared with and tested by students from mixed nationalities and backgrounds in collaborative groups. This approach is not only aimed at assessing the TUs’ effectiveness but also at fostering an environment of cross-cultural learning and exchange.

The methodology and outcomes of TA1 are presented here, with a focus on analyzing the application of the developed TUs and their impact on the students. Specifically, the TU described here introduces students to the application of Q-learning in robotics for navigation and obstacle avoidance, basic principles and techniques of image processing and object detection in computer vision, and the integration and control of sensors and actuators in the context of ambient intelligence.


2. Methodology

This section details the first Training Activity (TA1) within the AIM@VET project, focusing on its dual role: assessing the effectiveness of teaching units from each work island and enhancing collaborative learning among diverse AI specialization students. TA1 was fundamental for evaluating lesson functionality, sharing challenges and insights, and introducing lessons to new students. It also proved very useful for refining practical aspects such as coordination, standardization of lesson format, content, and pedagogical philosophy.

2.1 Description of the activity

TA1 focused on the core themes of computer vision, robotics, and ambient intelligence, each managed by one of the three work islands. Each island was responsible for creating scenarios and tasks within its respective domain. The activity encompassed 18 hours of teaching for each thematic area, culminating in a 90-minute joint session for presentations and feedback exchange, in which students from different work islands shared their results and insights.

2.2 Participants

Participants in TA1 included students from each of the work islands, encompassing a range of educational backgrounds and AI expertise levels. This diversity was intentional to foster cross-disciplinary learning and interaction. In line with this approach, the participants were strategically mixed across different thematic areas, with each work island delivering lessons to students from the other islands. This structure, detailed in Table 1, was designed to encourage cross-disciplinary learning and foster a collaborative educational environment.

Teaching Island | Lesson Focus | Student Groups
Spain | Robotics | 6 students (3 Slovenia, 3 Portugal)
Slovenia | Computer Vision | 6 students (3 Spain, 3 Portugal)
Portugal | Ambient Intelligence | 6 students (3 Spain, 3 Slovenia)

Table 1.

Distribution of mixed teaching approach in TA1.

2.3 Target age group and prerequisites

This activity is tailored for VET students aged 15–20. To accommodate this broad age range effectively, clear prerequisites are established to ensure students possess the foundational knowledge necessary for the proposed challenges. These prerequisites are:

  • Programming: A basic level of programming, including an understanding of conditionals, loops, variables, and functions.

  • Python: Basic familiarity with the Python programming language, suitable for engaging with simple programming tasks.

  • Mathematics: Knowledge of Cartesian coordinates, angles, and time measurements is important for the mathematical aspects of the challenges.

These prerequisites aim to equip students with the essential skills required to engage with and benefit from the learning modules, ensuring a productive and enriching educational experience.

2.4 Feedback mechanisms

In TA1, several feedback mechanisms were employed to evaluate and enhance the effectiveness of the training sessions.

  1. General Lesson Evaluation Templates for Students: These templates were used by teachers to assess student performance across the entire training activity unit. The evaluation focused on various aspects of the task, with teachers assigning scores based on a detailed rubric. This approach ensured a consistent and reliable method of evaluating student progress across different curriculum units. The criteria used in these templates included adequate selection of information, time management, design and construction of the solution, creativity, testing and debugging, programming and code, and teamwork.

  2. Specific Evaluation - Questionnaires: Following the completion of each unit, students were required to answer questionnaires that focused on specific aspects of the lesson, including theory and implementation. These questions aimed to gauge the students’ understanding and application of the material covered in the TU.

  3. Teacher Feedback: Teachers filled out templates evaluating each section of the lesson, with a focus on potential improvements. They provided assessments on the duration of each task, the theoretical content provided, and evaluation indicators for student performance. Teachers were also encouraged to propose new or modified evaluation indicators, contributing to the ongoing refinement of the teaching units.

These feedback mechanisms were integral to the continuous improvement of the teaching units and overall training activity, ensuring that both theoretical and practical components of the curriculum were effectively delivered and assimilated by the students. The evaluation templates presented here can be found in the modules section of the project website, accessible at [13].


3. Description of the teaching units

A cooperative Project-Based Learning (cPBL) approach has been used to implement the TUs that make up the AIM@VET modules. Students are organized in teams to face a challenge that must be solved by means of an AI system they program in Python. Following the PBL methodology, each TU establishes a set of design specifications, including a video displaying the final response the AI system must show, and provides a tentative organization in steps. The approach is based on a "hands-on" learning perspective in which theoretical concepts are reinforced as they are required to solve a practical issue (learning by doing), which is key for VET education.

All the materials associated with these TUs are available at [14], including a main file with the teacher's guide in PDF, the demonstration video of the expected solution, the Python programs, and other supporting materials.

3.1 Robotics

The current resource corresponds to the second TU of the robotics module of the AIM@VET project, focused on mobile robotics. The ultimate challenge of this TU is to enable a mobile robot to move autonomously within an industrial-like scenario, avoiding the risks posed by obstacles and floor-level changes, and to solve a patrolling task autonomously. The following subsections provide more details on this resource and its scope in AI teaching for VET education.

3.1.1 Temporal organization and required resources

The robotics unit is structured to be completed over 15 hours and tailored for students who have met the specified prerequisites. The curriculum is segmented into four main tasks, with the following time allocation: 5 hours for Task 1, 3 hours for Task 2, 4 hours for Task 3, and 3 hours for Task 4. This structure is designed to provide an in-depth exploration of each topic.

  • In Task 1, students engage in learning how to detect and avoid potential hazards in the environment, with a significant emphasis on practical application and understanding.

  • Task 2 introduces the basics of Q-learning, allocating sufficient time for both theoretical discussions and practical exercises.

  • Tasks 3 and 4 continue to explore advanced concepts and applications of Q-learning, with time allotted for comprehensive engagement with the content, hands-on activities, and problem-solving.

Each task commences with a theoretical introduction by the teacher, detailing the scope and objectives. Students are provided with specific time frames to work on the challenges, encouraging active engagement and the development of problem-solving skills. Upon completion, collective discussions are held to review possible solutions, facilitating reflective learning and consolidation of knowledge.

The following resources are necessary to facilitate this TU:

  1. Each student needs access to a laptop or computer capable of programming with Python.

  2. The RoboboSim simulator should be downloaded from the Robobo wiki [15] and installed on each student’s computer.

Robobo [16] is an educational robot comprising a mobile base connected to a smartphone via Bluetooth, forming a single robotic platform with high technical specifications, low cost, and high durability. As depicted on the left side of Figure 1, Robobo is equipped with a high-resolution camera, microphone, tactile screen, speaker, Wi-Fi, Bluetooth, a powerful CPU, and other features relevant to teaching AI at all levels. It supports programming in Scratch and Python and is compatible with ROS [15], offering libraries for computer vision, sound processing, speech production, and recognition, among others.

Figure 1.

Left: Robobo robot. Right: RoboboSim interface.

Furthermore, different simulation tools suitable for various educational levels and skills are available [17]. One of these tools is RoboboSim [15], a 3D realistic simulator used in this TU to introduce students to reinforcement learning. Developed using Unity technology, RoboboSim provides a computationally light and engaging learning environment with video game-like usability and aesthetics. It supports most of the sensors and actuators of the real robot, especially those required for the challenges presented in this activity. The right image of Figure 1 shows a snapshot of one RoboboSim simulation world, demonstrating its capabilities and interface.

3.1.2 AI concepts addressed

Throughout this TU, students will learn the fundamentals of reinforcement learning by implementing a specific algorithm: Q-learning. They will use the Q-table to guide the robot's decision-making process. By updating the Q-values using the Bellman equation [18], the robot can learn from its experiences and improve its actions over time. In addition, students will explore how the robot's actuators enable physical movements and actions, while its sensors provide important information about the environment. This knowledge about the interaction of the robot with its environment is essential for the effective integration of sensor feedback into the robot's decision-making. The robot learns a control program through a reward-based approach using Q-learning. Specifically, through iterative experimentation, Robobo employs the Q-learning algorithm to optimize its actions and autonomously navigate toward illuminated areas. It is important to note, however, that other control programs required to support Q-learning have been previously developed by students, and they remain predefined during the reinforcement learning process. These control programs correspond to obstacle avoidance and fall prevention. They are programmed by the designer to ensure the safety of both the robot and its environment, a key aspect of autonomous robotics for VET education in terms of the legal, ethical, and responsibility considerations that these students must learn to face in real-world jobs.
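To make the mechanics concrete, the following minimal sketch shows a tabular Q-learning loop of the kind this TU builds on, using the three states and three actions of Tables 2 and 3. It is an illustrative toy, not the project's actual Robobo code (available at [14]): the placeholder reward and random state transitions stand in for the light-sensor readings and robot actions students work with.

```python
import numpy as np

# States: S0 = light on the right, S1 = light on the left, S2 = light in front.
# Actions: A0 = turn right, A1 = turn left, A2 = go straight.
N_STATES, N_ACTIONS = 3, 3
ALPHA, GAMMA = 0.1, 0.9          # learning rate and discount factor

q_table = np.zeros((N_STATES, N_ACTIONS))

def update_q(state, action, reward, next_state):
    """Bellman update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = np.max(q_table[next_state])
    q_table[state, action] += ALPHA * (reward + GAMMA * best_next
                                       - q_table[state, action])

rng = np.random.default_rng(0)
for step in range(500):
    state = rng.integers(N_STATES)
    if rng.random() < 0.2:                        # epsilon-greedy exploration
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(np.argmax(q_table[state]))   # exploit the best known action
    # Placeholder reward: in the TU it is derived from the change in light
    # intensity measured by Robobo's sensors after the action is executed.
    reward = 1.0 if action == state else 0.0
    next_state = rng.integers(N_STATES)
    update_q(state, action, reward, next_state)

print(q_table)   # converges toward a diagonally dominant table, as in Table 3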

3.1.3 Challenge description

The primary goal of this Teaching Unit (TU) is to equip Robobo with the capability to autonomously navigate toward illuminated areas within unknown environments, identifying obstacles and avoiding falls. This requires students to program Robobo, blending predefined safety controls with the Q-learning algorithm for navigation. The simulation environment for this challenge, including obstacles and light sources, is depicted in Figure 2.

Figure 2.

Simulation environment used in the teaching unit.

Students are tasked with implementing a Python program that integrates control programs with the Q-learning algorithm, enabling Robobo to move autonomously and safely. This involves using Robobo’s sensors to measure light levels from various directions and determine the robot’s next move based on the light intensity changes, rewarding actions through the Bellman equation, and updating the Q-table accordingly. This setup illustrates the balance between AI adaptability and safety, a crucial aspect of this TU. To achieve these objectives, the TU is structured into focused subtasks, each building upon the last to gradually enhance students’ understanding and skills in autonomous navigation and AI adaptation:

  1. Autonomous Patrolling in an Industrial-like Setup: Students start by familiarizing themselves with Robobo’s sensors and actuators: wheels for movement, the pan-tilt unit for smartphone rotation, IR sensors for distance measurement, and the light sensor for ambient light level detection. A theoretical background in reinforcement learning and the Q-learning algorithm is provided, encouraging concept acquisition through challenge-solving.

  2. Autonomous Safety: Avoiding Obstacles and Falls with IR Sensors: The initial task is to program Robobo to autonomously navigate the environment shown in Figure 2, avoiding obstacles and falls. This involves dynamic adaptation to sensory inputs, with a Python template (shown in Figure 3) guiding the implementation of fall and collision avoidance behaviors.

  3. Evaluating the State of the Robot: Using the Light Sensor to Define Orientation Toward the Light: Students develop a function to determine Robobo’s state based on light direction, introducing them to the concept of “state” in reinforcement learning. This task is essential for setting up the logic behind autonomous navigation decisions.

  4. Autonomous Navigation: Executing Actions Using the Q-Table: This task involves filling in a Q-table (illustrated in Table 2) to link states with actions, ensuring Robobo navigates effectively without encountering obstacles or falls. Students learn to execute actions based on the highest Q-value for the current state, demonstrating the practical application of Q-learning.

  5. Autonomous Learning: Reward Calculation and Q-Table Update: The final task focuses on autonomous learning, where students calculate rewards and update the Q-table post-action (as seen in Table 3). This includes developing a function for reward calculation based on light intensity changes and updating the Q-table using the Bellman equation, enhancing Robobo’s navigation strategies.

Figure 3.

Python template for task 1 (left) and a possible solution for the teacher (right).

Q(S, A) | A0 (Turn right) | A1 (Turn left) | A2 (Go straight)
S0 (Right) | 1 | 0 | 0
S1 (Left) | 0 | 1 | 0
S2 (Front) | 0 | 0 | 1

Table 2.

Q-table manually filled by students.

Q(S, A) | A0 (Turn right) | A1 (Turn left) | A2 (Go straight)
S0 (Right) | 1.02 | 0.04 | 0.04
S1 (Left) | 0.55 | 1.3 | 0.34
S2 (Front) | 1.17 | 1.04 | 1.83

Table 3.

Final values of the Q-table obtained from the execution.

By sequentially addressing these subtasks, the TU offers a comprehensive learning experience, from sensor-based obstacle avoidance to sophisticated decision-making via Q-learning, ensuring students gain a thorough understanding of autonomous navigation in robotics. For a more detailed account of this implementation, refer to [19].

3.1.4 Outcomes

The initial implementation of this teaching unit took place between April and May 2023 at CIFP Rodolfo Ucha Piñeiro in Spain, involving six VET students with prior RoboboSim experience. During this phase, all students completed task 1, four managed task 3 by identifying states and executing actions from the Q-table, and one student successfully applied the Bellman equation [18] (see Figure 4). Time constraints were noted as a significant challenge, but the students' engagement and conceptual understanding were evident.

Figure 4.

Level of fulfillment of the tasks by the students across implementations.

Feedback highlighted the teaching unit’s effectiveness, with students finding the balance of theory and practical tasks engaging. Despite some struggles with code errors, likely due to limited programming backgrounds, the overall response was positive. Four students gained new insights into AI, underscoring the importance and interest in the subject matter, which was further supported by teacher observations.

An analysis based on a general evaluation template and student questionnaire revealed that the theoretical content’s complexity was seen as challenging but manageable, with students’ perceptions evenly distributed across the complexity spectrum (see Figure 5).

Figure 5.

Complexity rating of the TU contents perceived by students.

Following an in-depth feedback review, we identified several key issues with the original teaching unit (TU) that required a comprehensive revision. These included the need for clearer, simpler explanations of advanced topics; sufficient time allocation for exploring complex subjects; a solid foundational understanding of mathematical concepts, notably the matrices essential for Q-learning; and explicit guidance in applying theoretical concepts, particularly the Bellman equation [18].

To address these challenges, we introduced significant enhancements in the TU:

  • Enhanced Clarity and Accessibility: We refined the TU’s language for greater accessibility and introduced explicit examples to explain complex ideas such as reward systems and state transitions within Q-learning.

  • Optimized Time Management: We extended the time allotted to basic concepts, acknowledging the complexity of Q-learning, while making supplementary tasks optional.

  • Mathematical Foundations Support: We provided students with essential coding tools, including ready-made Q-tables and the Bellman equation [18], facilitating a deeper comprehension of the mathematical concepts.

  • Comprehensive Theoretical Application Guidance: The updated TU provides a detailed guide through critical tasks, with clear explanations of key operations such as reward calculation and Q-table updates and thorough clarification of function parameters and expected results, minimizing ambiguity.

During the training activity, a second implementation of the teaching unit took place with three students from Portugal and three from Slovenia. Among the six students, five were male and one was female, with ages ranging between 15 and 18 years. Their digital competencies were slightly weaker than those of the Spanish group: although they were familiar with programming fundamentals, their assimilation of these concepts was not as strong. The materials and questionnaires used in this second activity are accessible at [14].

On the first two days, a total of nine hours was dedicated to introducing RoboboSim and its fundamentals. The third day, with seven hours of class, was dedicated to the specific tasks of this work. During this session, students were able to successfully identify states, create their own Q-tables, and execute the corresponding actions. They achieved better results in a significantly shorter time than the Spanish group, as displayed in Figure 4. This outcome, coupled with positive feedback from teachers, confirmed that the feedback from the initial implementation at CIFP Rodolfo Ucha Piñeiro had served to improve the preliminary versions of the TUs, leading to a refined teaching approach.

Completing the robotics teaching unit with six students who were new to robotics provided valuable insights into both learning and teaching within this field. Despite their varied educational backgrounds, the students showed notable engagement and adaptability, especially when allowed sufficient time to delve into complex topics.

Teachers’ feedback highlighted the value of the project’s practical, hands-on approach, though it was also noted that a stronger base in programming would be beneficial for future learning.

3.2 Computer vision

This work package develops the computer vision teaching units for VET students, supporting the project’s main goals. Computer vision, a key component of AI, processes images to derive information for decision-making. Grasping its concepts and applications is essential, as computer vision is found in diverse areas such as industry, smart homes, and autonomous vehicles. The teaching units therefore focus on critical topics such as unbiased data curation and object recognition, blending classical and deep learning approaches [20, 21], with an emphasis on practical, real-world application examples.

3.2.1 Temporal organization and required resources

The development of the TU takes approximately 10 hours if students have the necessary prior knowledge. It has been organized into 8 tasks, with an approximate load of 1 hour for each of the first 6 tasks and 2 hours for each of the last two. In the first task, students become familiar with the concepts of computer vision in general. In the following tasks, they learn how to load images, modify them, apply filters, perform simple segmentation, continue with detection, and conclude with the basics of tracking. Each task begins with a theoretical introduction in which the teacher presents the topic, after which small challenges are given to the students and possible solutions are discussed collectively. The required resources for this activity are:

  1. A computer per student to program with Python in Jupyter Notebooks.

  2. Python with the necessary libraries, such as OpenCV, Pillow, and NumPy.

  3. A regular webcam, either built into the laptop or connected to the computer.

Jupyter Notebook is an open-source web application that allows users to create and share documents containing live code, equations, visualizations, and narrative text. It is widely used for data analysis, machine learning, scientific research, and education because it supports interactive computing and data visualization. Jupyter supports over 40 programming languages, including Python, R, and Scala, and integrates with big data processing tools; in our case, however, it was used exclusively with Python. One such notebook is shown in Figure 6.

Figure 6.

An example of a Jupyter notebook with the code for students to play with.

The combination of Jupyter, OpenCV, Pillow, NumPy, and other libraries provides a powerful toolkit for processing, analyzing, and visualizing data in an interactive and user-friendly environment. OpenCV, a real-time computer vision library released under the open-source Apache 2.0 license, excels in elementary computer vision tasks. Complementing it, Pillow extends the capabilities of the Python Imaging Library, enhancing usability and supporting an array of image formats ideal for web integration and batch processing. NumPy ties these tools together with its fundamental array and matrix operations, underpinning scientific packages like SciPy and Pandas and proving indispensable in fields such as data analysis, machine learning, and engineering. Other libraries are also used to support the teaching materials.

3.2.2 AI concepts addressed

Throughout their studies, students will delve into the complexities of computer vision, homing in on the algorithms that enable machines to interpret and understand visual data. By examining the nuances of simple biometric recognition systems, such as detecting faces in live webcam feeds, learners will grapple with the practicalities and challenges of implementing such systems. In the first TA, a critical part of their exploration was the scrutiny of bias in data sets, an issue of crucial importance not only in computer vision but in the wider field of AI. Students learn to identify and mitigate bias, understanding its implications for both the accuracy and fairness of decision-making. As they develop and test biometric algorithms, they are encouraged to consider the ethical dimensions of AI, such as the potential for discrimination, thereby preparing them for the responsible development of technology in the future.

3.2.3 Challenge description

In this TU, the main goal was to become familiar with the basics of common computer vision tasks. This includes loading images, understanding their matrix nature (e.g., channel order), modifying them, performing filtering tasks, doing simple segmentation, detecting important parts of images and deriving features from them, and covering the basics of motion analysis. This establishes a foundational understanding for students in the domain of computer vision and prepares them to tackle more advanced topics, such as person recognition [22] and the problems of image generation [23, 24]. Each topic covered by the tasks is described in more detail below.

Basics of Image Processing. Image processing forms the foundation of computer vision. It involves techniques to enhance raw images for further processing or to extract valuable information. Basic image processing tasks include loading, displaying, and storing images. Understanding how to manipulate the fundamental attributes of an image, such as color channels and intensity levels, is crucial. Grayscale conversion, thresholding, and basic filtering are typical tasks. The importance of these tasks lies in their ability to preprocess images for advanced analysis, improving the performance of complex algorithms in object detection, facial recognition, and more.
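As a small illustration of these first steps, the sketch below loads an image with OpenCV, inspects its array shape, converts it to grayscale, and applies a binary threshold. The file name is a placeholder; any test image will do.

```python
import cv2

# Load an image and inspect its matrix nature. OpenCV stores color images
# as NumPy arrays in BGR channel order.
img = cv2.imread("photo.jpg")             # "photo.jpg" is a placeholder path
print(img.shape)                          # (height, width, 3)

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)          # grayscale conversion
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)  # thresholding
cv2.imwrite("binary.png", binary)
```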

Modifying Images. Modifying images involves resizing, cropping, altering brightness and contrast, and geometric transformations. These techniques are critical in preparing images for specific applications. For instance, resizing and cropping can focus on regions of interest, enhancing the efficiency of object detection algorithms. Adjusting brightness and contrast makes features more discernible, which is crucial in medical imaging and surveillance. Understanding these transformations is key to customizing image data for varied applications, from automated inspection systems to augmented reality.

Advanced Image Processing. Advanced processing encompasses techniques like histogram equalization and convolution operations. Histogram equalization is a method for contrast adjustment using the image’s histogram. This is particularly beneficial in improving the visibility of features in images, which is essential in areas like satellite imaging and night-vision processing. Convolutions, involving filters like blurring and sharpening, are fundamental in edge detection and noise reduction, paving the way for higher-level tasks such as feature extraction and image classification.
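The sketch below, again with a placeholder image path, illustrates the two techniques named here: histogram equalization for contrast adjustment and a convolution with a small sharpening kernel applied via cv2.filter2D.

```python
import cv2
import numpy as np

gray = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path

# Histogram equalization spreads intensity values over the full range,
# improving the contrast of under- or over-exposed images.
equalized = cv2.equalizeHist(gray)

# Convolution with a 3x3 sharpening kernel.
kernel = np.array([[ 0, -1,  0],
                   [-1,  5, -1],
                   [ 0, -1,  0]], dtype=np.float32)
sharpened = cv2.filter2D(gray, -1, kernel)

cv2.imwrite("equalized.png", equalized)
cv2.imwrite("sharpened.png", sharpened)
```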

Image Segmentation. Image segmentation divides an image into meaningful regions, facilitating easier analysis. Techniques like thresholding, region-based segmentation, and watershed algorithms play a significant role. In medical imaging, for example, segmentation aids in identifying and isolating pathological regions. In autonomous vehicles, it helps in understanding the driving environment. The significance of segmentation lies in its ability to simplify complex images into analyzable segments, making it a cornerstone in both automated systems and interpretative analysis.

Feature Detection and Description. This process involves identifying key points in images and describing them in a way that facilitates matching and recognition. Algorithms like SIFT (Scale-Invariant Feature Transform) and ORB (Oriented FAST and Rotated BRIEF) are instrumental in this. These features are pivotal in tasks like image matching, 3D reconstruction, and motion tracking. They provide robust ways to represent and compare images, crucial in applications ranging from augmented reality to navigation in robotics.
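A minimal ORB example along these lines (with a placeholder file name) detects keypoints, computes their descriptors, and draws the keypoints on the image:

```python
import cv2

img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder path

# ORB detects keypoints and computes binary descriptors for them, which
# can later be matched between images for recognition or 3D reconstruction.
orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(img, None)

annotated = cv2.drawKeypoints(img, keypoints, None, color=(0, 255, 0))
cv2.imwrite("keypoints.png", annotated)
```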

Object Detection. Object detection is about identifying and locating objects within images. Techniques like template matching, Haar cascades, and HOG (Histogram of Oriented Gradients) are used for this purpose. Object detection is vital in various domains, such as security surveillance for unauthorized intrusion detection, in retail for inventory management, and in automotive for pedestrian and obstacle detection. The ability to accurately detect objects in diverse conditions is a testament to the evolution and importance of this field.
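As one possible illustration, the following sketch runs OpenCV's bundled frontal-face Haar cascade on a live webcam feed, in the spirit of the face-detection exercise mentioned in Section 3.2.2. Webcam index 0 is an assumption; press 'q' to stop.

```python
import cv2

# Load the frontal-face Haar cascade that ships with opencv-python.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                 # assumed default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:            # draw a box around each detection
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```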

Optical Flow and Motion Analysis. Optical flow techniques, like the Lucas–Kanade method and dense optical flow, estimate the motion of objects between frames in a video sequence. These methods are crucial in understanding the dynamics of scenes and useful in video surveillance for anomaly detection, in sports analytics for player movement analysis, and in video editing for creating special effects. Optical flow provides a deeper understanding of motion patterns, making it invaluable in temporal analysis of videos.
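A compact sparse optical-flow sketch with the Lucas-Kanade method might look as follows; "clip.mp4" is a placeholder, and Shi-Tomasi corners are used as the points to track.

```python
import cv2

cap = cv2.VideoCapture("clip.mp4")        # placeholder; a webcam index (0) also works
ok, first = cap.read()
prev_gray = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)

# Shi-Tomasi corners are the sparse points whose motion we track.
p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                             qualityLevel=0.3, minDistance=7)

while p0 is not None and len(p0) > 0:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Lucas-Kanade: estimate where each tracked point moved between frames.
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
    good = p1[status.flatten() == 1]
    print(f"tracked {len(good)} points")
    prev_gray = gray
    p0 = good.reshape(-1, 1, 2)

cap.release()
```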

3.2.4 Outcomes

The computer vision content provided a valuable learning experience for students, highlighting the practical applications and basic theoretical underpinnings of computer vision. Students engaged in a series of tasks intended not only to solidify their understanding of computer vision fundamentals but also to prepare and motivate them for real-world applications.

As described above, during the course, students demonstrated proficiency in basic image processing techniques, including image loading, manipulation, and the application of filters. They were introduced to more complex topics such as image segmentation, feature detection, and object detection, culminating in an understanding of motion analysis through optical flow methods. This progression from foundational skills to more advanced concepts allowed students to build a comprehensive knowledge base in computer vision.

One of the key outcomes was the students’ ability to modify and apply code made available to them to actual scenarios. For instance, they successfully used and customized algorithms for detecting objects in images, a skill crucial for many applications in industries such as security, automotive, and healthcare. The hands-on approach, facilitated by the use of Jupyter Notebooks and Python libraries such as OpenCV, Pillow, and NumPy, was instrumental in bridging the gap between theory and practice.

Feedback from students and teachers alike underscored the relevance and engagement of the teaching units. Students appreciated the practical challenges, which enhanced their learning experience. They found the tasks engaging and relevant (albeit on the difficult side), reflecting the growing importance of computer vision in various technological domains.

However, similar to other subject areas, challenges were encountered. Time constraints were noted as a significant hurdle, with some students requiring more time to fully grasp complex concepts or not being able to grasp certain concepts at all. This feedback has been invaluable in planning future iterations of the teaching unit, with considerations for more flexible scheduling and possibly more in-depth preparatory sessions on programming fundamentals for students new to Python.

The successful completion of tasks by the students, coupled with their ability to understand and apply computer vision concepts, indicates the effectiveness of the teaching unit in achieving its educational objectives. It has laid a solid foundation for students, equipping them with the skills and knowledge to further explore the field of computer vision and its applications.

Moving forward, the curriculum will be simplified and refined based on the feedback received. This includes enhancing the clarity of theoretical explanations, adjusting the pacing of the course to better match student learning speeds, and incorporating more examples of real-world applications to further illustrate the impact of computer vision technologies. The ultimate goal remains to provide students with a robust understanding of computer vision, preparing them for future careers in this dynamic and rapidly evolving field.

3.3 Ambient intelligence

The current resource corresponds to the first and second TUs of the ambient intelligence module of the AIM@VET project, which focus on sensors and actuators. The ultimate aim of these TUs is for students to learn how sensors and actuators work and how to control them with an Arduino Uno microcontroller. This is the basic principle of sensorization: the first step toward an intelligent ambient, Internet of Things (IoT) environment that can be controlled automatically using AI. The idea is thus to understand how data is collected and how actuators operate in an environment; the next step is to prepare the intelligent agent to act and react to this environment using AI techniques.

3.3.1 Temporal organization and required resources

The development of Modules 1 and 2, a comprehensive teaching unit on sensorization, requires approximately 24 hours to complete for students without the necessary prior knowledge. Due to limitations in applying all planned activities during the training session, however, we selected five key activities crucial for learning sensorization, each requiring two hours to complete; each activity may consist of one or more tasks. The temporal organization of the sensorization teaching unit is thus structured to condense the curriculum into a total of 10 hours, ensuring a dedicated and intensive exploration of the subject matter.

The first activity introduces the foundational concepts within the “Required Knowledge” section, covering Ambient Intelligence (AmI), the Internet of Things (IoT), and the Internet of People (IoP). This introductory phase, lasting about 30 minutes, sets the stage for the hands-on work to follow, ensuring that students have a solid conceptual framework before diving into the more technical aspects. Following the introduction, the activity progresses into more practical tasks. Students start with understanding and setting up the Arduino Uno, a process that takes about 30 minutes. This is a crucial step, as it involves familiarizing themselves with the hardware that will be central to their subsequent activities. Next, they move on to preparing the Arduino Uno for integration with Python. This task, estimated at 45 minutes, introduces students to the software aspect of their projects, bridging the gap between the physical hardware and the programming environment. A significant portion of the unit is dedicated to hands-on work with various sensors. Within the first activity, students spend about 15 minutes learning to use a breadboard, which is essential for assembling electronic circuits; this brief but informative task closes the first activity.

The second activity then delves deeper into the application of Python programming with Arduino. Students explore the use of PyFirmata and Pymata libraries, engaging in tasks that require them to build circuits and write code. Each of these tasks is allocated 60 minutes, allowing ample time for experimentation and learning.
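As a sketch of what such a task can look like, the following PyFirmata example blinks an LED on pin 13 while reading an analog sensor on pin A0. The serial port name is machine-specific and therefore an assumption, and the board must be running the StandardFirmata sketch.

```python
import time
from pyfirmata import Arduino, util

# Assumed serial port: e.g. 'COM3' on Windows, '/dev/ttyACM0' on Linux.
board = Arduino('/dev/ttyACM0')

it = util.Iterator(board)    # background thread that keeps sensor values fresh
it.start()

led = board.get_pin('d:13:o')      # digital pin 13, output
sensor = board.get_pin('a:0:i')    # analog pin 0, input
sensor.enable_reporting()

for _ in range(10):
    led.write(1)                   # LED on
    time.sleep(0.5)
    led.write(0)                   # LED off
    time.sleep(0.5)
    # read() returns a value normalized to 0.0-1.0, or None before data arrives
    print("A0 reading:", sensor.read())

board.exit()
```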

The third activity introduces a unique aspect: virtual sensors. Students are introduced to the concept and application of virtual sensors, such as those used for climatological and air quality data. They learn how to access and use data from APIs like Open Weather Map and Open Air Quality, which involves writing Python scripts to fetch and process this data. These activities are given substantial time, with 45 minutes dedicated to learning the required knowledge and 75 minutes to practical implementation.
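A minimal sketch of such a "virtual sensor" query against the OpenWeatherMap current-weather endpoint is shown below. The API key and city are placeholders; a free account is needed to obtain a real key.

```python
import requests

# Placeholder credentials and location for illustration only.
API_KEY = "YOUR_API_KEY"
url = "https://api.openweathermap.org/data/2.5/weather"
params = {"q": "Braga,PT", "appid": API_KEY, "units": "metric"}

response = requests.get(url, params=params, timeout=10)
response.raise_for_status()        # fail loudly on HTTP errors
data = response.json()

# The API returns a JSON document; temperature and humidity live under "main".
print("Temperature (C):", data["main"]["temp"])
print("Humidity (%):", data["main"]["humidity"])
```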

Throughout the fourth and fifth activities, students progressively build their skills and knowledge, culminating in a final project or challenge. This project involves integrating all the skills they have acquired, requiring them to develop a program for collecting data from sensors using Arduino and Python. The project tests not only their technical abilities but also their capacity to integrate various components into a cohesive system.

The sensorization teaching unit necessitates a range of resources that are integral to the successful delivery and completion of the course. These resources span across various categories, including hardware, software, and additional support materials, ensuring that students have everything they need to effectively engage with and complete the unit.

At the forefront of the required hardware is a reliable Wi-Fi network with internet connectivity. This is crucial not only for accessing online resources and documentation but also potentially for project tasks that involve internet-based applications or data retrieval. Additionally, each student needs access to a laptop or computer, essential for coding in Python, interfacing with the Arduino, and other software-related tasks.

Smartphones, specifically Android-based, are also listed among the required hardware. The use of smartphones could be linked to interfacing with the Arduino projects or for testing and running applications developed during the course. Furthermore, the unit requires one Arduino Uno and a set of sensors for every two students. Arduino Uno serves as the foundational hardware for teaching sensor integration and data collection, and the sensors are crucial for hands-on experimentation and understanding real-world data acquisition.

In terms of software, Python is a primary requirement, as it is used extensively for programming the Arduino and data processing tasks. Additional software might include the Arduino Integrated Development Environment (IDE) and other related tools or libraries such as PyFirmata, which are used for interfacing Python with the Arduino.

An often-overlooked but equally important resource is a projector, recommended for displaying teaching unit materials and multimedia resources to all students. This facilitates a more interactive and engaging learning environment, allowing instructors to visually demonstrate concepts and procedures.

In addition to these, there might be requirements for basic electronic components such as breadboards, jumper wires, and power supplies, which are standard in any electronics lab setting. These components are necessary for building and testing circuits, and for interfacing the various sensors with the Arduino Uno.

To support the theoretical learning, there may also be a need for textbooks or online resources. These materials would provide additional reading and reference material to students, helping them to deepen their understanding of the concepts taught in the unit.

3.3.2 AI concepts addressed

The teaching unit begins with an introduction to the Arduino Uno, covering both its hardware and software components. Students will then be guided on how to prepare the Arduino for Python programming using PyFirmata, along with an explanation of the Firmata protocol. This foundational knowledge will be further enhanced by teaching them about breadboards and how to prepare them for creating electronic circuits. Students will gain practical experience by building simple circuits, programming them in Python, and exploring different approaches to problem-solving. A significant part of this phase involves the introduction and application of an ultrasonic sensor, which students will program and test to understand its functionality.

Virtual sensors represent another critical area of learning, where students will learn to apply these sensors through API calls and data interpretation. They will engage in tasks that involve accessing climatological and air quality data using APIs like Open Weather Map and Open Air Quality, enabling them to work with real-world data in a controlled environment.

Further into the materials, students will explore serial data communication with Arduino, learning to use Digital Read Serial with Python. This will include practical exercises like programming push buttons and LEDs, providing a deeper understanding of digital signal processing. Additionally, the course will cover the control of motor speed using Pulse Width Modulation (PWM), where students will experiment with DC motors and servo motors, learning to adjust their speed through programming in Python.
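To illustrate, the hedged sketch below uses PyFirmata's PWM pin mode to ramp up a motor's duty cycle. The port name is again an assumption, and in practice the motor would be driven through a transistor or motor-driver board rather than directly from the Arduino pin.

```python
import time
from pyfirmata import Arduino

board = Arduino('/dev/ttyACM0')    # assumed serial port

# 'd:9:p' requests digital pin 9 in PWM mode; writing a value in 0.0-1.0
# sets the duty cycle, which controls the average power the motor receives.
motor = board.get_pin('d:9:p')

for duty in (0.25, 0.5, 0.75, 1.0):   # ramp the motor speed up in steps
    motor.write(duty)
    time.sleep(2)

motor.write(0)                     # stop the motor
board.exit()
```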

This structured approach will not only equip students with the necessary technical skills but also foster a deeper understanding of AI concepts through interactive and engaging activities.

3.3.3 Challenge description

The challenge presented in the sensorization teaching unit is a comprehensive and integrative project that encapsulates the skills and knowledge acquired throughout the course. Its final objective is to develop a program that collects data from sensors using Arduino and Python, effectively combining hardware interfacing with software programming so that, in the future, the captured sensor data can be processed with AI algorithms and used to act on the environment.

Throughout the unit, students are introduced to various concepts and practical skills, starting with an understanding of Ambient Intelligence, the IoT, and the Internet of People. They learn the basics of Arduino Uno, setting it up, and preparing it for integration with Python. An essential part of this is learning to use a breadboard for circuit assembly, which is crucial for any hands-on electronics project.

As students progress, they delve into using specific libraries like PyFirmata and Pymata, which are instrumental in controlling the Arduino through Python. The challenge involves working with various sensors – such as ultrasonic, temperature, and humidity sensors – and understanding how to implement them in practical applications. This not only includes the physical assembly and programming of these sensors but also the interpretation of the data they collect.

A significant part of the challenge is working with virtual sensors. Students are expected to access climatological and air quality data using APIs such as Open Weather Map and Open Air Quality. They develop Python scripts to fetch and process this data, integrating it with the information collected from physical sensors.

The culminating project requires students to bring together all these elements. They need to design and program a system that can collect data from both the physical sensors attached to the Arduino and the virtual sensors accessed via APIs. This task tests their technical skills in programming and electronics, as well as their ability to process and interpret data. The challenge is not just about building a functional system but also understanding the practical applications of this technology in real-world scenarios.

The project is evaluated on various fronts, including technical skill in programming and circuit assembly, the effectiveness of data collection and processing, and the overall integration of different technologies and concepts. This challenge serves as a practical demonstration of the students’ learning and proficiency in sensor technology and its applications in the digital world.

3.3.4 Outcomes

In completing the challenge of the sensorization teaching units, students are expected to achieve several significant outcomes that demonstrate their understanding and application of the concepts and skills learned throughout the course. Throughout the unit, we employed a methodologically rich and diverse approach to learning, blending theoretical knowledge with practical skills. This multifaceted methodology has been pivotal in ensuring a comprehensive understanding of sensor technologies, Arduino programming, and Python integration. The following paragraphs reflect on what we have learned methodologically so far and on how this approach will continue to evolve as the course progresses.

Firstly, students will have developed a comprehensive program that successfully integrates Arduino and Python to collect data from various sensors. This involves not only the technical aspects of programming and circuit design but also a nuanced understanding of how different sensors operate and interact. By working with both physical sensors (such as ultrasonic, temperature, and humidity sensors) and virtual sensors accessed via APIs, students showcase their ability to handle diverse data sources. Figure 7 shows an example of a simple circuit.

Figure 7.

An example of a simple activity.

So far, the course has emphasized a hands-on learning approach. This method is crucial in technical subjects, allowing students to directly interact with the tools and technologies they are studying. By working with Arduino and various sensors, students have gained practical skills that are often difficult to fully grasp through theoretical study alone. This hands-on approach not only aids in solidifying the students’ understanding of sensor technologies but also enhances their problem-solving and critical-thinking abilities.

An important outcome of this challenge is the practical application of the collected data. Students must demonstrate how to process, interpret, and potentially visualize this data in a meaningful way. This might involve identifying patterns, making predictions, or simply understanding the environmental conditions measured by the sensors. The ability to translate raw sensor data into usable information is a crucial skill in many fields, including IoT applications, environmental monitoring, and robotics.

The project also tests the students’ problem-solving and critical-thinking skills. They need to integrate different technological components and concepts into a cohesive system. This includes troubleshooting issues that arise during the development process, whether related to hardware, software, or the interfacing between the two.

Additionally, students will have gained hands-on experience in programming with Python and working with Arduino, which are valuable skills in the fields of computer science, engineering, and technology. Their exposure to working with APIs to access virtual sensors like Open Weather Map and Open Air Quality broadens their understanding of how large-scale data can be accessed and used. The incremental learning model adopted in the course has been another key aspect of our methodological approach. We started with basic concepts and gradually moved to more complex topics. This step-by-step progression has ensured that students build a strong foundational knowledge before tackling more advanced aspects of sensor technology and programming. Such a structured approach helps in maintaining clarity and focus, preventing students from feeling overwhelmed by the complexities of the subject matter.

Integration of theory and practice has been a hallmark of our methodology. Theoretical concepts about Ambient Intelligence (AmI), the Internet of Things (IoT), and the role of sensors in digital transformation have been seamlessly tied to practical tasks and projects. This has enabled students to see the real-world applications of their learning, making the educational experience more relevant and engaging.

Lastly, the challenge offers an opportunity for creativity and innovation. Students can explore various applications of the technology, suggest improvements, or even propose new ways of using the sensor data. This fosters a mindset of exploration and innovation, which is crucial in the rapidly evolving field of technology. Collaborative learning has also been a focus: while the course encourages individual skill development, it also fosters an environment where students share knowledge, work together on projects, and learn from each other. This blend of individual and collaborative learning enriches the educational experience and prepares students for the teamwork-oriented nature of the professional world.

Looking forward, the course will continue to build upon these established methodological foundations. We will introduce more complex and challenging projects, encouraging students to apply their learned skills in new and innovative ways. These projects will not only reinforce the students’ existing knowledge but will also push them to explore the limits of their creativity and problem-solving skills.

We will also continue to emphasize the real-world application of the skills and knowledge gained. Students will be encouraged to think about how the technologies and methods they are learning can be applied to solve real-world problems. This focus on practical application is intended to bridge the gap between academic learning and professional skills.


4. Results and discussion

This final section reviews feedback and learning outcomes from Training Activity 1 (TA1), bringing together lessons from robotics, ambient intelligence, and computer vision. It examines what worked well, the challenges encountered, and how the teaching methods for this AI curriculum can be improved.

One element that proved essential was student engagement and learning progress. Feedback from these sessions emphasized the critical need to allow students enough time to thoroughly explore and understand complex topics. Given adequate time, students were not only able to comprehend the foundational concepts but also found the tasks highly engaging. This underscores an important lesson: the pace at which new information is introduced must be aligned with the content's complexity, so that students remain fully engaged and able to grasp the material comprehensively.

There were, however, important challenges to highlight, particularly where students struggled with the basic mathematical concepts needed to progress to more sophisticated tasks. This difficulty revealed a gap in the foundational knowledge of some participants, underlining the importance of educational strategies that equip all students with a robust understanding of essential mathematical principles before moving on to more complex subjects.

The improvements identified from this feedback therefore focus on making complex subjects more accessible and engaging for all students. These included simplifying the instructional language to ensure clarity, adjusting the curriculum to allow more time for foundational concepts, providing practical tools such as code templates to bridge gaps in mathematical understanding (an example of what such a template might look like is sketched below), and offering detailed walkthroughs for applying theoretical concepts.
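As a hypothetical example of such a code template (not one of the project's actual materials), the snippet below wraps a mathematical operation that commonly appears in introductory AI tasks, the Euclidean distance between two feature vectors, so that students can use it without first having to derive the formula themselves.

```python
# Hypothetical "code template" handed to students: the mathematical
# operation (Euclidean distance between two equal-length vectors,
# used e.g. when comparing sensor states or image features) is
# implemented for them, letting them focus on the AI task itself.
import math

def euclidean_distance(a, b):
    """Distance between two equal-length vectors: sqrt(sum((a_i - b_i)^2))."""
    if len(a) != len(b):
        raise ValueError("vectors must have the same length")
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Students call the template instead of implementing the formula:
print(euclidean_distance([255, 0, 0], [200, 50, 0]))  # distance between two RGB colors
```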

At the end of TA1, a questionnaire covering various aspects of the workshop was administered to the students and the teachers. Tables 4 and 5 report the quantitative results as average ratings on a 1-5 star scale.

Question | Rating
Do you think you had enough prior knowledge to tackle this lesson? | 3.83
How would you rate the theoretical explanations? (1 star: very easy; 5 stars: very difficult) | 3.28
How would you rate the level of difficulty of the challenges? | 3.06
Do you think there has been a good balance between theory and practice? (1 star: too much theory; 5 stars: too much practice) | 2.89
How interesting do you think it is to work with a partner compared to working alone? | 3.50
What do you think about the number of days dedicated to workshops (3 days)? (1 star: too short; 5 stars: too long) | 2.72
What do you think about the number of hours per day dedicated to workshops? (1 star: too short; 5 stars: too long) | 4.00
What do you think about the number of hours per day dedicated to activities? (1 star: too short; 5 stars: too long) | 2.78
Do you feel that this lesson has increased your understanding of AI? | 3.89
Please rate your overall experience of the first training activity | 4.17

Table 4. Quantitative results from the students' questionnaires.

Question | Rating
Do you think the students had sufficient prior knowledge to tackle this lesson? | 2.50
Do you believe the students have acquired relevant knowledge? | 3.33
How many difficulties have they encountered due to the language barrier? (1 star: few difficulties; 5 stars: many difficulties) | 1.83
How positive has it been to work in international pairs? | 4.33
How positive has it been to change partners in the middle of the activity? | 4.33
How many students have actively participated and engaged in the activity? | 4.50
What do you think about the number of days dedicated to workshops (3 days)? (1 star: too short; 5 stars: too long) | 2.67
What do you think about the number of hours per day dedicated to workshops? (1 star: too short; 3 stars: ideal duration; 5 stars: too long) | 4.17
What do you think about the number of hours per day dedicated to cultural and social activities? (1 star: too short; 3 stars: ideal duration; 5 stars: too long) | 2.83

Table 5. Quantitative results from the teachers' questionnaires.

The student feedback from the AI training session revealed a positive overall experience with a notable increase in AI understanding. However, while students felt somewhat prepared and engaged with the material, they also indicated that the number of workshop hours per day was higher than ideal. Despite this, the practical elements and partner collaboration were well-received, suggesting a successful integration of AI learning into the curriculum with room for adjustment in scheduling.

Teachers’ feedback from the questionnaire indicates a mixed assessment of the AI training program. They observed that students came with a moderate understanding of the subject matter. The acquisition of relevant knowledge during the course was satisfactory. Language barriers did not significantly impede learning. Collaborative aspects of the program, such as working in international pairs and rotating partners, were highly regarded. Student engagement was notably high. However, teachers felt that the workshop days were slightly longer than ideal, and the hours allocated to workshops each day were deemed excessive. Conversely, they considered the time for cultural and social activities to be marginally insufficient.

The synthesis of feedback from Training Activity 1 (TA1) highlights a generally positive experience for both students and teachers, marking the project's success in integrating AI into vocational training. Students valued the practical application of AI, noting an effective balance between theory and practice, and showed keen interest in exploring further AI tools such as ChatGPT. Teachers, on the other hand, pointed out the students' need for stronger foundational knowledge in AI and suggested shorter workshop hours for future sessions. Overall, the positive reception of TA1 underscores its contribution to enhancing AI education in vocational settings, reflecting its alignment with the evolving requirements of the job market.


5. Conclusions

The first Training Activity (TA1) within the AIM@VET project showcased a practical approach to integrating Artificial Intelligence (AI) into Vocational Education and Training (VET) curricula. Focusing on autonomous robotics, computer vision, and ambient intelligence, TA1 provided VET students with direct experience in applying AI concepts and techniques. This activity not only improved their technical skills in AI but also equipped them with essential knowledge of the impact of AI technologies, preparing them for the digital workforce.

The structured methodology adopted for TA1, combining theoretical lessons with practical tasks, proved effective with students. The collaborative and cross-cultural learning environment also enriched the educational experience, promoting a broader perspective on the impact of AI technologies.

Feedback from both students and teachers underscores the success of the project in achieving its educational objectives. Students have gained not only a theoretical understanding of AI principles but also hands-on experience in applying these concepts through projects and challenges. The positive reception of the teaching units, as evidenced by the high level of student engagement and interest, indicates a strong appreciation for the relevance and practicality of the curriculum.

Teachers have highlighted the importance of a balanced approach between theory and practice, and the feedback received has been instrumental in refining the teaching units for future implementations. The adjustments made based on this feedback, such as simplifying language, allowing more time for complex topics, and providing practical tools for mathematical concepts, have enhanced the accessibility and effectiveness of the curriculum.

The initial Training Activity (TA1) within the AIM@VET project was also key in refining our protocols for lesson creation and standardization. This refinement led to a more unified and standardized curriculum, improving coordination among project partners. The protocols addressed the creation of templates and set criteria for content creation, student assessment, and the standardization of complexity levels, lesson durations, and prerequisite knowledge.

The move toward standardization was necessary for collaborative curriculum development across different research groups. It brought together the work of the various teams, resulting in a coordinated approach to educational content, and it allowed each partner's work and conceptual modules to be recognized within a common base that will aid future curriculum development involving multiple working groups.

As these protocols are further refined and applied, they also establish a framework for collaborative and coherent curriculum design, not only in AI education within VET programs but, more generally, in any project-based learning curriculum.

Therefore, the AIM@VET project will serve as a model for future initiatives aimed at integrating AI education into VET programs. The insights gained from TA1 offer valuable guidance for educators and policymakers on how to design and implement AI curricula that are both engaging and informative. The project’s focus on practical skills, collaboration, and cross-cultural exchange aligns with the evolving needs of the workforce, preparing students for the challenges and opportunities of the digital age.


Acknowledgments

The authors wish to acknowledge the Erasmus+ Programme of the European Union (grant 2022-1-ES01-KA220-VET-000089813).

References

1. Dignum V, Penagos M, Pigmans K, Vosloo S. Policy Guidance on AI for Children. New York City, USA: UNICEF; 2021 [Online]. Available from: https://www.unicef.org/globalinsight/reports/policy-guidance-ai-children [Accessed: July 24, 2023]
2. European Commission. Digital Education Action Plan 2021-2027. Brussels, Belgium: European Commission; 2021 [Online]. Available from: https://education.ec.europa.eu/focus-topics/digital-education/digital-education-action-plan [Accessed: July 24, 2023]
3. Miao F, Holmes W, Huang R, Zhang H. AI and Education: Guidance for Policy-makers. Paris, France: UNESCO; 2021 [Online]. Available from: https://unesdoc.unesco.org/ark:/48223/pf0000376709 [Accessed: July 1, 2024]
4. Ng DTK, Leung JKL, Chu SKW, Qiao MS. Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence. 2021;2:100041
5. UNESCO. K-12 AI Curricula: A Mapping of Government-Endorsed AI Curricula. 2022 [Online]. Available from: https://unesdoc.unesco.org/ark:/48223/pf0000380602.locale=en [Accessed: July 24, 2023]
6. MIT RAISE. Day of AI Curriculum [Online]. Available from: https://www.dayofai.org/curriculum [Accessed: July 24, 2023]
7. Bellas F, Guerreiro-Santalla S, Naya M, Duro R. AI curriculum for European high schools: An embedded intelligence approach. International Journal of Artificial Intelligence in Education. 2023;33(2):399-426
8. AI4K12. AI4K12 [Online]. Available from: https://ai4k12.org [Accessed: July 24, 2023]
9. Rodríguez-García JD, Moreno-León J, Román-González M, Robles G. Introducing artificial intelligence fundamentals with LearningML: Artificial intelligence made easy. In: Eighth International Conference on Technological Ecosystems for Enhancing Multiculturality. New York City, USA: ACM; 2020. pp. 18-20
10. Rott K, Lao L, Petridou E, Schmidt-Hertha B. Needs and requirements for an additional AI qualification during dual vocational training: Results from studies of apprentices and teachers. Computers and Education: Artificial Intelligence. 2022;3:100102
11. Attwell G, Bekiaridis G, Deitmer L, Perini M, Roppertz S, Stieglitz D, et al. Artificial Intelligence and Vocational Education and Training: How to Shape the Future [Online]. Available from: https://taccleai.eu/artificial-intelligence-vocational-education-and-training-policy-recommendations/ [Accessed: July 24, 2023]
12. Prieto A, Guerreiro S, Bellas F. AIM@VET: Tackling equality on employment opportunities through a formal and open curriculum about AI. In: Computational Intelligence in Security for Information Systems Conference. Berlin, Germany: Springer; 2023. pp. 228-237
13. AIM@VET. AIM@VET Web [Online]. Available from: https://aim4vet.udc.es [Accessed: July 24, 2023]
14. AIM@VET Repositories. Teaching Unit 2 and Training Activity 1 [Online]. Available from: https://bit.ly/3Or7StX; https://bit.ly/3DtuFyM [Accessed: July 24, 2023]
15. Robobo Wiki Page [Online]. Available from: https://github.com/mintforpeople/robobo-programming/wiki [Accessed: July 24, 2023]
16. Llamas LF, Paz-Lopez A, Prieto A, Orjales F, Bellas F. Artificial intelligence teaching through embedded systems: A smartphone-based robot approach. In: Robot 2019: Advances in Intelligent Systems and Computing. Vol. 1092. Berlin, Germany: Springer; 2020
17. Naya-Varela M, Guerreiro-Santalla S, Baamonde T, Bellas F. Robobo SmartCity: An autonomous driving model for computational intelligence learning through educational robotics. IEEE Transactions on Learning Technologies. 2022;16(4):543-559
18. Wikipedia. Bellman Equation [Online]. Available from: https://en.wikipedia.org/wiki/Bellman_equation [Accessed: July 24, 2023]
19. Renda C, García AP, Bellas F. Teaching reinforcement learning fundamentals in vocational education and training with RoboboSim. In: ROBOT2023: Sixth Iberian Robotics Conference. Berlin, Germany: Springer; 2023 [Online]. Available from: https://easychair.org/smart-program/ROBOT2023/bytalk-2023-11-24.html
20. Emeršič Ž, Hrastnik G, Meh Peer N, Peer P. Adapting VET education to labor market needs with focus on artificial intelligence and computer vision. In: ROSUS. Maribor, Slovenia: University of Maribor, University Press; 2023. pp. 77-86
21. Kirn VL, Emeršič Ž, Hrastnik G, Meh Peer N, Peer P. Introductory computer vision teaching materials for VET education. In: ROSUS. Maribor, Slovenia: University of Maribor, University Press; 2024
22. Emeršič Ž, Štepec D, Štruc V, Peer P, George A, Ahmad A, et al. The unconstrained ear recognition challenge. In: 2017 IEEE International Joint Conference on Biometrics (IJCB). Denver, Colorado, USA: IEEE; 2017. pp. 715-724
23. Meden B, Gonzalez-Hernandez M, Peer P, Štruc V. Face deidentification with controllable privacy protection. Image and Vision Computing. 2023;134:104678
24. Markićević L, Peer P, Emeršič Ž. Improving ear recognition with super-resolution. In: 2023 30th International Conference on Systems, Signals and Image Processing (IWSSIP). Ohrid, North Macedonia: IEEE; 2023. pp. 1-5
