Abstract: This study investigates how robotics and artificial intelligence systems can be improved through comprehensive sensor-based data analysis and predictive modeling. The research examines three important sensor parameters (Sensor 1, Sensor 2, and Sensor 3) and their combined impact on a system performance index in AI and robotics applications. Using a dataset of 30 observations, two machine learning techniques, linear regression and random forest regression, were applied to predict and analyze the performance effects. The analysis reveals that Sensor 1 shows the strongest positive correlation with the performance index (r = 0.65), followed by Sensor 2 (r = 0.57), while Sensor 3 shows the weakest correlation (r = -0.043). Linear regression achieved an R² of 0.92 on the training data but dropped to 0.65 on the test data, indicating potential overfitting concerns.
Random forest regression showed excellent training performance with an R² of 0.97, but its testing performance decreased to 0.53, indicating challenges related to model complexity. Descriptive analysis showed that Sensor 1 (mean = 49.47, SD = 25.42) had the highest variability, while Sensor 3 (mean = 10.16, SD = 5.64) provided the most consistent measurements. The performance index ranged from 8.43 to 76.21, with a mean of 47.11. Correlation heat map analysis confirms the independence of the sensor measurements, ensuring minimal multicollinearity. These findings have significant implications for AI and robotics system design, highlighting the importance of prioritizing certain sensor inputs for optimal performance prediction. This research contributes to understanding how multiple sensor parameters interact to determine overall system performance and provides a framework for performance optimization in intelligent automation systems.
1. Introduction
Automation, robotics, artificial intelligence, and machine learning form the theoretical foundation for research on robotics and AI. These technologies can serve as both independent and dependent variables in this body of work. [2] Although academic research on intelligent automation, including robotics and artificial intelligence, is growing rapidly, little is known about how these technologies impact human resource management (HRM) for both workers and businesses. [3] Although traditional robots are intended to be general-purpose machines, building robots that can perform a variety of tasks remains expensive, which underscores the need for intelligent robots. [4] Robotics and artificial intelligence have evolved separately over time.
While roboticists, with backgrounds in mechanical and electrical engineering, specialize in sensory-based tasks, artificial intelligence scholars have focused on algorithms and abstraction issues. [5] Early technological advances, despite their initial setbacks, freed humans from repetitive, mundane tasks. Self-driving cars, FDA-approved surgical robots, and artificial intelligence (AI) systems that mimic human behavior in disruptive, "trigger-inducing" ways are just a few examples of cutting-edge technologies that society is still struggling to integrate decades later. [6] In addition to addressing issues of ownership, governance, and regulation, legislators should promptly enact evidence-based policies to respond to advances in robotics and AI. Many repetitive, productive tasks previously completed by people have already been automated. [7] Incorporating AI into regional anesthesia will require a revolutionary approach to patient data management, including collecting and integrating digital photographs, prescription databases, shortage codes, cancer databases, surgical outcome registries, and preoperative records. [8] The development and use of robotic surgery is closely linked to advances in artificial intelligence. Urology is at the forefront of advances in laparoscopic surgery, and robotics is used for nephrectomy, prostatectomy, and cystectomy. Previous research on robotic surgery perceptions and patient comfort provides important information about how the general public perceives AI in healthcare. [9] Through a review of the literature and analysis of stakeholder requests, three key areas of strategic interest were identified: artificial intelligence (AI), robotics, and the Internet of Things (IoT). These technologies are critical to the growth of the global economy as well as to safety and security. [10] The goal of efficiency is consistent with artificial intelligence, which differs from previous developments in that it automates cognitive processes rather than manual ones. AI enables computational tasks to be performed more quickly and smoothly thanks to advances in computer science and the digital revolution. [11] Many of these ethical issues are already having an impact on society, and many more are on the horizon, even if some seem futuristic and unrelated to common technologies such as touchscreen ordering systems, self-checkout kiosks, and automated teller machines. [12] As a result of the rapid development of digital technologies, including digital communication, infrastructure, and new innovations, many aspects of social life are changing, including human interactions, labor, behavior, and production and consumption.
The transition to "smart cities" is facilitated by ICT developments that create new opportunities for more efficient and integrated urban management. [13] Of the three technologies examined, robotics scenarios have been found to require more technical development and time before they can be used to help special education students, because modern applications are more likely to be found in non-traditional settings such as manufacturing. [14] In addition, several other European research groups, both within and outside EU-funded projects, are contributing significantly to the AI-robotics synergy. However, the scope of European efforts in this area is not well captured, as robotics has a limited focus on advisory capabilities. [15] With regular interactions, these self-learning, real-time question-and-answer robots can become better at answering questions. [16] The integration of robotics and artificial intelligence (AI) is rapidly emerging as a key component in creating new markets, innovative technologies, and improved performance in established technology businesses. [17] Artificial intelligence and robotics demonstrate the potential for computer awareness, cognition, and reasoning. The development and use of modern technologies has generated a debate about their eligibility for rights, but such rights can only be considered in the context of the duties and obligations that accompany them.
2. Materials and Method
Materials:
Sensor 1: Sensor 1 refers to a numerical input variable that captures primary measurements from the environment or internal processes of an AI or robotic system. It may reflect physical data such as distance, temperature, or pressure, depending on the application. This parameter is essential because it provides fundamental information that contributes to the system’s perception, decision-making, and adaptive responses. By analyzing Sensor 1, performance models can evaluate how initial conditions affect robotic control, accuracy, and performance.
Sensor 2: Sensor 2 is a numerical input parameter designed to record secondary system or environmental properties related to AI and robotics. It may represent variables such as speed, vibration, load, or force, depending on the environment. This sensor complements Sensor 1 by providing additional dimensions of operational data. Incorporating Sensor 2 improves the robustness of the models, enabling accurate predictions and adaptive strategies. Together with other inputs, Sensor 2 significantly influences the calculation and interpretation of system performance.
Sensor 3: Sensor 3 is another input variable that provides important supplementary numerical data for improving robotic and AI operations. It can capture features such as orientation, energy consumption, angular position, or localized environmental factors. Sensor 3 enriches the dataset by adding depth and variation to the inputs, improving prediction accuracy. Its contribution helps identify complex interactions between multiple signals. By incorporating Sensor 3, AI and robotics models gain resilience, which helps them adapt effectively to various operational conditions.
Performance Index: The performance index is a numerical output variable derived from the inputs of Sensor 1, Sensor 2, and Sensor 3. It reflects the overall performance, accuracy, or efficiency of an AI or robotic system under given conditions. This index combines various sensor measurements into a single evaluation metric. By monitoring and analyzing the performance index, researchers and engineers can identify system strengths, detect inefficiencies, and develop optimization strategies for improved performance and intelligent decision-making.
Optimization techniques
Linear Regression: Linear regression is a statistical method used to predict quantitative outcomes and has been studied extensively in numerous textbooks over time. Although it may seem less exciting than modern statistical learning methods, it remains widely used and highly relevant. In addition, it serves as a foundation for more advanced techniques, since many sophisticated statistical learning methods can be seen as extensions or generalizations of linear regression. A solid understanding of linear regression is therefore essential before exploring more complex approaches. The fundamental ideas of linear regression are examined here, along with the least squares method commonly used to fit the model. Regression serves two primary purposes. First, it is widely used for forecasting and prediction, often with significant overlap with machine learning applications. In regression analysis, the dependent variable y is predicted from different values of the independent variable x. This paper focuses on simple linear regression and multiple (multivariate) regression, both of which are well suited to predictive modelling. Simple linear regression uses a single independent variable to determine its effect on a dependent variable and is represented by the equation y = β₀ + β₁x + ε, which describes the relationship between the variables. In addition, simple regression helps to isolate the influence of the independent variable as distinct from interactions among the other variables.
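To make the formulation concrete, the following minimal sketch fits a simple linear regression of the form y = β₀ + β₁x + ε with scikit-learn. The column names follow Table 1, while the file name sensor_data.csv is a hypothetical placeholder; this is an illustrative sketch under those assumptions, not the exact code used in the study.

```python
# Minimal sketch of simple linear regression (y = b0 + b1*x + error) with scikit-learn.
# The file name "sensor_data.csv" is a hypothetical placeholder for the Table 1 data.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("sensor_data.csv")
X = df[["Sensor1"]]              # single independent variable
y = df["PerformanceIndex"]       # dependent variable

model = LinearRegression().fit(X, y)     # ordinary least squares fit
print("Intercept (beta_0):", model.intercept_)
print("Slope (beta_1):", model.coef_[0])
print("R^2 on the full dataset:", model.score(X, y))
```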
Random Forest Regression: Random forest regression is a supervised machine learning technique used for predictive modelling. The method trains several decision trees on different subsets of the dataset and averages their outputs to improve prediction accuracy, which not only improves performance but also reduces the computational burden associated with training, storing, and predicting with many individual models. Because of their efficiency, random forests are extremely helpful for regression tasks, where continuous values are predicted. The random forest technique builds a "forest" of several independently constructed decision trees, and the final forecast is obtained by averaging each tree's output. By exposing each tree to slightly different data, this approach helps to reduce variance and overfitting, ultimately improving the generalizability of the model.
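The averaging idea described above can be sketched with scikit-learn's RandomForestRegressor as shown below. The split proportion, random seed, and number of trees are assumptions for illustration and are not values reported in this paper.

```python
# Sketch of random forest regression: an ensemble of decision trees whose
# predictions are averaged. File name, split size, and hyperparameters are
# assumptions for illustration only.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("sensor_data.csv")
X = df[["Sensor1", "Sensor2", "Sensor3"]]
y = df["PerformanceIndex"]

# Small hold-out set (the paper's figures suggest only a few test points).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)

rf = RandomForestRegressor(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)
print("Train R^2:", rf.score(X_train, y_train))
print("Test R^2:", rf.score(X_test, y_test))
```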
3. Analysis and Discussion
Table 1. Artificial Intelligence and Robotics
| Sensor1 | Sensor2 | Sensor3 | PerformanceIndex |
| --- | --- | --- | --- |
| 43.708611 | 32.339518 | 8.3848685 | 48.898525 |
| 95.564288 | 12.673586 | 6.1556316 | 55.83696 |
| 75.879455 | 7.9273217 | 16.746013 | 51.529182 |
| 63.879264 | 47.699849 | 7.7783132 | 76.20631 |
| 24.041678 | 48.453441 | 6.3377557 | 48.463865 |
| 24.039507 | 41.377881 | 11.311226 | 52.705647 |
| 15.227525 | 18.70762 | 3.6775603 | 8.4323559 |
| 87.955853 | 9.3952451 | 16.241743 | 60.559399 |
| 64.100351 | 35.790486 | 2.4164622 | 57.554913 |
| 73.726532 | 24.806862 | 19.750852 | 58.401762 |
| 11.852604 | 10.491721 | 15.672651 | 19.339444 |
| 97.291887 | 27.282961 | 4.7755979 | 58.934633 |
| 84.919838 | 6.5474835 | 1.1049202 | 46.324156 |
| 29.11052 | 45.919418 | 16.493767 | 54.078679 |
| 26.364247 | 16.645099 | 14.43029 | 36.932282 |
| 26.506406 | 34.813503 | 14.851136 | 39.612417 |
| 37.381802 | 19.026998 | 15.654137 | 32.850315 |
| 57.228079 | 28.403061 | 2.4068484 | 45.848292 |
| 48.875052 | 29.601963 | 7.8108488 | 52.090735 |
| 36.210623 | 13.318451 | 3.2015121 | 30.033654 |
| 65.066761 | 48.631308 | 17.398965 | 69.953364 |
| 22.554447 | 39.880977 | 12.842664 | 46.016662 |
| 36.293018 | 47.277452 | 7.2870625 | 54.700269 |
| 42.972566 | 45.267231 | 2.2076087 | 58.412437 |
| 51.046299 | 31.905499 | 6.9086641 | 46.401731 |
| 80.665837 | 46.484341 | 7.1784831 | 74.106245 |
| 27.97064 | 8.9821626 | 14.862517 | 23.915186 |
| 56.281099 | 13.819229 | 13.113592 | 35.075252 |
| 63.317311 | 7.035228 | 17.857042 | 44.708625 |
| 14.180537 | 19.639865 | 9.9720836 | 25.399045 |
Table 1 presents a dataset that demonstrates the relationship between the sensor-based input variables (Sensor1, Sensor2, and Sensor3) and a single output measure, the performance index. Each row represents an observation in which the sensor measurements collectively affect the performance of an artificial intelligence or robotic system. The variation across rows highlights how different combinations of inputs contribute to the system's performance and accuracy. Sensor1 records a wide range of values, from roughly 12 to 97, and appears to exert a strong underlying influence on the performance index. For example, when Sensor1 values are high, such as 95.56 or 97.29, the performance index is also elevated, reaching 55.84 and 58.93, respectively. This indicates that Sensor1 represents a central measurement that can be linked to the underlying processing or primary operating conditions of the system. Sensor2 shows moderate variation, with values extending from approximately 7 to almost 49. It plays a complementary role, and higher Sensor2 values often correspond to an increased performance index. For example, when Sensor1 is 63.88 and Sensor2 is 47.70, the resulting performance index is 76.21, the highest in the dataset. This suggests that Sensor2 may capture a supporting or confirming aspect of system performance. Sensor3 spans a narrow range but adds important variation, often amplifying or moderating the result. For example, a relatively high Sensor3 value of 19.75, combined with mid-range Sensor1 and Sensor2 values, yields a strong performance index of 58.40. Conversely, when all three inputs are low, such as Sensor1 at 15.23 and Sensor2 at 18.71, the performance index drops dramatically to 8.43. The dataset illustrates how the interaction between multiple sensor measurements determines the performance of AI and robotics systems. Higher values across sensors generally lead to stronger outcomes, while weaker input combinations significantly reduce system performance.
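As a brief illustration of this kind of row-level inspection, the sketch below locates the highest- and lowest-performing observations in the Table 1 data; the CSV file name is a hypothetical placeholder.

```python
# Locate the observations with the highest and lowest performance index
# in the Table 1 data (the CSV file name is a hypothetical placeholder).
import pandas as pd

df = pd.read_csv("sensor_data.csv")
print("Highest performance index:")
print(df.loc[df["PerformanceIndex"].idxmax()])   # e.g. Sensor1=63.88, Sensor2=47.70 -> 76.21
print("Lowest performance index:")
print(df.loc[df["PerformanceIndex"].idxmin()])   # e.g. Sensor1=15.23, Sensor2=18.71 -> 8.43
```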
Table 2. Descriptive Statistics
| Statistic | Sensor1 | Sensor2 | Sensor3 | PerformanceIndex |
| --- | --- | --- | --- | --- |
| Count | 30 | 30 | 30 | 30 |
| Mean | 49.473754 | 27.338192 | 10.161027 | 47.110745 |
| Std | 25.416676 | 14.809696 | 5.637825 | 15.681151 |
| Min | 11.852604 | 6.547483 | 1.10492 | 8.432356 |
| 25% | 26.872465 | 13.443645 | 6.201163 | 37.602316 |
| 50% | 46.291831 | 27.843011 | 9.178476 | 48.681195 |
| 75% | 64.825158 | 41.003655 | 15.456232 | 57.125425 |
| Max | 97.291887 | 48.631308 | 19.750852 | 76.20631 |
Table 2 summarizes the statistical properties of the three input variables (Sensor1, Sensor2, Sensor3) and the output variable (performance index) for the 30 observations. These statistics provide an overview of the central tendencies, spread, and ranges across the dataset. The mean value of Sensor1 is 49.47, with a standard deviation of 25.42, indicating that the measurements are widely spread and include both low (11.85) and very high (97.29) values. Sensor2 has a mean of 27.34 and a standard deviation of 14.81, indicating moderate variability across the observations. Sensor3 averages 10.16 with a small spread (standard deviation of 5.64), indicating that its values are more tightly clustered than those of Sensor1 and Sensor2. The performance index, which combines all three sensor inputs, has a mean of 47.11 and a standard deviation of 15.68. This indicates that system performance varies moderately but is generally centered around mid-range values. The lowest performance index recorded is 8.43, while the highest is 76.21, showing significant differences in results depending on sensor conditions. The quartile values also show that half of the performance index results fall between 37.60 and 57.13, emphasizing the system's tendency toward mid-range performance with occasional extreme values.
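The summary in Table 2 matches the output of pandas' describe() method; a minimal sketch is shown below, assuming the Table 1 data has been saved to a hypothetical sensor_data.csv file.

```python
# Reproduce the Table 2 descriptive statistics (count, mean, std, min, quartiles, max).
import pandas as pd

df = pd.read_csv("sensor_data.csv")   # hypothetical file containing the Table 1 data
print(df[["Sensor1", "Sensor2", "Sensor3", "PerformanceIndex"]].describe())
```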
FIGURE 1. Scatter Plot of Various Artificial Intelligence and Robotics Parameters
Figure 1 shows a scatterplot matrix with histograms for Sensor1, Sensor2, Sensor3, and the performance index. The histograms on the diagonal provide an overview of the distribution of each variable. Sensor1 shows a wide spread from the lowest to the highest values, indicating that its data is evenly distributed across the entire range. Sensor2 also shows a wide spread, but is more heavily clustered around mid-range values. In contrast, Sensor3 shows a narrow spread with a higher concentration between 5 and 15. The performance index histogram reveals that most values are centered in the mid-range, although some high-performance outcomes are also evident. The scatterplots reveal the extent of the relationships between the variables. The panels involving Sensor1 and the performance index show a visible upward trend, indicating that higher Sensor1 values are often associated with better performance. Sensor2 also shows some positive correlation with the performance index, although the spread of the points indicates variability. However, Sensor3 does not show a clear linear relationship with the performance index, suggesting that its effect may be weak or nonlinear. Furthermore, the panels between the sensors show weak or scattered relationships, indicating that each sensor measurement provides unique information rather than overlapping with the others. This independence of inputs strengthens the model's ability to capture different aspects of system performance.
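A scatterplot matrix with diagonal histograms of this kind can be generated with seaborn's pairplot, as in the sketch below; the data file name is again a hypothetical placeholder.

```python
# Scatterplot matrix with histograms on the diagonal, similar to Figure 1.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv("sensor_data.csv")   # hypothetical file containing the Table 1 data
sns.pairplot(df[["Sensor1", "Sensor2", "Sensor3", "PerformanceIndex"]], diag_kind="hist")
plt.show()
```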
FIGURE 2. Correlation Heat Map Depicting the Relationships Between Process Parameters and Response Variables
Figure 2 presents a correlation heat map illustrating the relationships between three input variables (Sensor1, Sensor2, Sensor3) and the output variable (performance index). The color scale ranges from –1 to +1, where positive values indicate direct relationships and negative values reflect inverse relationships. The analysis reveals that Sensor1 has a strong positive correlation with the performance index (0.65). This indicates that an increase in Sensor1 measurements is generally associated with improved system performance, making it the most influential parameter. Sensor2 also shows a meaningful positive correlation with the performance index (0.57), indicating its supporting role in improving outcomes. Together, Sensor1 and Sensor2 make a significant contribution to predicting the performance trends of the system. In contrast, Sensor3 shows a negligible correlation with the performance index (–0.043). This indicates that Sensor3 has little or no direct influence on overall performance, or its contribution may be nonlinear and not captured by simple correlations. The weak correlations between sensors (–0.12 to –0.19) indicate that each sensor provides independent information without significant overlap. The heat map highlights Sensor1 and Sensor2 as the main factors determining system performance in artificial intelligence and robotics applications, while Sensor3 plays only a minimal role. This finding reinforces the importance of prioritizing certain inputs for model development and system optimization.
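A correlation heat map like Figure 2 can be reproduced from the Pearson correlation matrix of the dataset; the sketch below is one possible implementation under the same hypothetical file-name assumption.

```python
# Pearson correlation matrix and heat map, similar to Figure 2.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv("sensor_data.csv")   # hypothetical file containing the Table 1 data
corr = df[["Sensor1", "Sensor2", "Sensor3", "PerformanceIndex"]].corr()

sns.heatmap(corr, annot=True, vmin=-1, vmax=1, cmap="coolwarm")
plt.title("Correlation Heat Map")
plt.show()
```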
Linear Regression
FIGURE 3. Linear Regression PerformanceIndex (Training data)
Figure 3 illustrates the predicted and actual values of the performance index using a linear regression model trained on the dataset. The scatter plot compares the predicted outcomes on the vertical axis to the actual observed values on the horizontal axis, with the dashed line representing the best-case scenario where the predictions match the observations perfectly. Most of the data points are close to the diagonal line, indicating that the model provides accurate predictions for most cases. This alignment demonstrates that the linear regression model has effectively captured the relationship between the sensor inputs (Sensor 1, Sensor 2, and Sensor 3) and the performance index. Points distributed tightly along the line indicate minimal error in the prediction, supporting the reliability of the regression fit. However, some deviations are visible, especially in the mid-range values, where the actual results are around 40–50. These small gaps indicate that although the model performs well overall, there are small underestimations or overestimations for some observations. However, the absence of extreme outliers indicates the robustness and consistency of the model’s predictive ability. The figure highlights that linear regression is a suitable approach for modeling system performance in this case, with strong predictive alignment between actual and estimated values.
FIGURE 4. Linear Regression PerformanceIndex (Testing data)
Figure 4 illustrates the performance of the linear regression model by comparing the predicted performance index values from the test dataset with the actual values. The scatterplot has a diagonal dashed line that represents the ideal situation where the predicted values exactly match the actual values (i.e., the equality line, where predicted = actual). Each blue dot in the plot corresponds to a data point from the test dataset; its horizontal position represents the actual performance index and its vertical position shows the predicted value. In this figure, the plotted data points fall relatively close to the diagonal line, indicating that the linear regression model has made reasonably accurate predictions for the performance index. However, there are noticeable deviations from the ideal line, indicating some error in the model's predictions. For example, one point shows a predicted value that is significantly lower than the actual value, while another point lies slightly above the diagonal line. These deviations highlight that while the model captures the general trend of the data, it is not completely accurate and can under- or over-predict in some cases. The limited number of data points indicates that the test dataset is small, which may limit the generalizability and robustness of the estimate. In addition, the relatively tight set of points indicates that the model's predictions are consistent, although additional statistical evaluation (such as R² or RMSE) is necessary to measure the accuracy of the model more precisely.
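Predicted-versus-actual plots such as Figures 3 and 4 are typically produced as in the sketch below. Since the paper does not state the exact train/test split, the 90/10 split, the random seed, and the file name here are assumptions.

```python
# Predicted vs. actual plot for linear regression on a held-out test set.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("sensor_data.csv")                      # hypothetical data file
X = df[["Sensor1", "Sensor2", "Sensor3"]]
y = df["PerformanceIndex"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)

lr = LinearRegression().fit(X_train, y_train)
y_pred = lr.predict(X_test)

plt.scatter(y_test, y_pred)                              # one point per test observation
lims = [y.min(), y.max()]
plt.plot(lims, lims, linestyle="--")                     # ideal line: predicted = actual
plt.xlabel("Actual PerformanceIndex")
plt.ylabel("Predicted PerformanceIndex")
plt.show()
```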
Table 3. Performance Metrics of Linear Regression PerformanceIndex (Training, Testing Data)
| Data | Symbol | R² | EVS | MSE | RMSE | MAE | MaxError | MSLE | MedAE |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Train | LR | 0.92157 | 0.92157 | 19.75253 | 4.44438 | 3.35976 | 11.97147 | 0.03103 | 2.10082 |
| Test | LR | 0.64880 | 0.69826 | 35.84826 | 5.98734 | 5.92230 | 7.14491 | 0.01873 | 5.51301 |
Table 3 provides a detailed overview of the performance of the linear regression (LR) model in predicting the performance index on both the training and test datasets using various evaluation metrics. Starting with the R² score, which measures how well the variance in the dependent variable is explained by the model, the training data shows a robust value of 0.92157, indicating an excellent model fit. However, the R² on the test data drops to 0.64880, indicating that the model performs less effectively on unseen data and may overfit the training set. The explained variance score (EVS) follows a similar pattern, falling from 0.92157 in training to 0.69826 in testing, again confirming that the model captures the data patterns better in training than in testing. Both the mean squared error (MSE) and root mean squared error (RMSE) are higher on the test data (35.85 and 5.99) than on the training data (19.75 and 4.44), indicating a larger average error when predicting new observations. Similarly, the mean absolute error (MAE) increases from 3.36 (train) to 5.92 (test), reflecting further reduced accuracy. Although the maximum error is actually lower in testing (7.14) than in training (11.97), which may be due to outliers in the training set, the median absolute error (MedAE) is much worse in testing (5.51 vs. 2.10), reinforcing the discrepancy. The mean squared log error (MSLE) is low for both sets, indicating that the relative prediction errors are small, especially for low values.
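All of the metrics reported in Tables 3 and 4 are available in scikit-learn. The helper below sketches how they might be computed for a single data split; y_true and y_pred stand for the actual and predicted performance index values and are placeholders rather than variables defined in the paper.

```python
# Helper that computes the metric set reported in Tables 3 and 4 for one data split.
import numpy as np
from sklearn.metrics import (r2_score, explained_variance_score, mean_squared_error,
                             mean_absolute_error, max_error, mean_squared_log_error,
                             median_absolute_error)

def regression_report(y_true, y_pred):
    """Return R2, EVS, MSE, RMSE, MAE, MaxError, MSLE, and MedAE as a dictionary."""
    mse = mean_squared_error(y_true, y_pred)
    return {
        "R2": r2_score(y_true, y_pred),
        "EVS": explained_variance_score(y_true, y_pred),
        "MSE": mse,
        "RMSE": float(np.sqrt(mse)),
        "MAE": mean_absolute_error(y_true, y_pred),
        "MaxError": max_error(y_true, y_pred),
        "MSLE": mean_squared_log_error(y_true, y_pred),
        "MedAE": median_absolute_error(y_true, y_pred),
    }

# Example usage (y_test and y_pred come from a fitted model, as in the earlier sketches):
# print(regression_report(y_test, y_pred))
```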
Random Forest Regression
FIGURE 5. Random Forest Regression PerformanceIndex (Training data)
Figure 5 presents a scatter plot comparing the predicted and actual PerformanceIndex values from the training dataset using the Random Forest Regression model. Each blue dot represents a training data instance, where the x-axis denotes the actual PerformanceIndex and the y-axis shows the corresponding predicted value. The dashed diagonal line serves as a reference line indicating perfect prediction, i.e., where predicted values exactly match the actual values. From the figure, it is evident that the Random Forest model fits the training data very well. Most of the points lie very close to or directly on the diagonal line, indicating minimal error in predictions. This alignment suggests a high level of accuracy and that the model has successfully captured the underlying patterns in the training dataset. The tight clustering of points around the ideal line across the entire range of PerformanceIndex values, from lower to higher, also demonstrates the model's strong capability to generalize within the training set. However, while this near-perfect fit on training data is encouraging, it may also raise concerns about overfitting, a common issue with ensemble models like Random Forests. Overfitting occurs when a model learns the training data too well, including noise and outliers, which can negatively impact its ability to generalize to unseen (test) data. Therefore, while Figure 5 confirms excellent training performance, it must be interpreted alongside testing performance metrics to assess the model's overall reliability and generalization power.
FIGURE 6. Random Forest Regression PerformanceIndex (Testing data)
Figure 6 presents the performance of the random forest regression model by plotting the predicted values against the actual performance index values on the test dataset. The dashed diagonal line represents the ideal case where the predictions match the actual results perfectly, and the three blue dots correspond to the test cases. Although the test sample is small (only three data points), the dots lie reasonably close to the diagonal line, suggesting that the model captures the general trend of the test set. Unlike typical scatter plots with large datasets, the sparse distribution here limits the ability to draw broad statistical conclusions. Based on visual evidence alone, the model's test performance appears broadly in line with its training performance in Figure 5, but the metrics in Table 4 show a clear drop from training to testing, pointing to some degree of overfitting. Caution is therefore warranted, as the very small test set makes it difficult to rigorously assess the robustness and reliability of the model. A more comprehensive evaluation with a larger dataset would be required to confidently assess the model's predictive power. Nevertheless, based on this figure, the random forest regression model shows some ability to generalize to unseen data when predicting the performance index.
Table 4. Random Forest Regression PerformanceIndex (Training Data, Testing Data)
| Data | Symbol | R² | EVS | MSE | RMSE | MAE | MaxError | MSLE | MedAE |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Train | RFR | 0.97354 | 0.97356 | 6.66270 | 2.58122 | 2.05590 | 6.60817 | 0.01335 | 1.80951 |
| Test | RFR | 0.53269 | 0.58111 | 47.69938 | 6.90647 | 6.85532 | 7.83331 | 0.02290 | 6.94818 |
Table 4 provides an in-depth comparison of the performance metrics of the Random Forest Regression (RFR) model on both the training and testing datasets. On the training data, the model shows excellent performance, with an R² value of 0.97354 and an explained variance score (EVS) of 0.97356, indicating that it explains more than 97% of the variance in the performance index. The error metrics are correspondingly low, with a mean squared error (MSE) of 6.66, a root mean squared error (RMSE) of 2.58, and a mean absolute error (MAE) of 2.06. These values suggest highly accurate predictions during training. Furthermore, the maximum error is 6.61, and both the mean squared log error (MSLE) and median absolute error (MedAE) are low at 0.01335 and 1.81, respectively, confirming the accuracy of the model and its minimal deviation from the true values. However, the performance of the model decreases significantly when applied to the test data. The R² score drops to 0.53269, showing that only about 53% of the variation in the test set is captured, indicating reduced predictive power on unseen data. The EVS also drops to 0.58111, and the error metrics increase substantially: the MSE rises to 47.70, the RMSE to 6.91, and the MAE to 6.86. The median absolute error more than triples to 6.95, and although the maximum error (7.83) and MSLE (0.02290) remain within acceptable limits, the overall increase in error indicates that the model may overfit the training data. This observation is consistent with the scatter plots in Figures 5 and 6, where nearly perfect training predictions contrast with less accurate test predictions.
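One simple way to quantify the overfitting gap discussed above is to compare train and test R² for both models side by side, as in the sketch below; the data file, split, and hyperparameters are assumptions rather than the study's exact configuration.

```python
# Compare train/test R^2 for both models to quantify the overfitting gap.
# File name, split size, and hyperparameters are assumptions for illustration.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("sensor_data.csv")
X = df[["Sensor1", "Sensor2", "Sensor3"]]
y = df["PerformanceIndex"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)

models = {"Linear Regression": LinearRegression(),
          "Random Forest": RandomForestRegressor(n_estimators=100, random_state=42)}

for name, model in models.items():
    model.fit(X_train, y_train)
    train_r2 = model.score(X_train, y_train)
    test_r2 = model.score(X_test, y_test)
    print(f"{name}: train R^2 = {train_r2:.3f}, test R^2 = {test_r2:.3f}, "
          f"gap = {train_r2 - test_r2:.3f}")
```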
4. Conclusion
This comprehensive study on AI and robotics performance optimization through sensor-based predictive modeling has provided significant insights into the complex relationships between multiple input parameters and system performance outcomes. The research successfully demonstrates that sensor measurements can serve as reliable predictors of AI and robotics system performance, with varying degrees of influence and correlation strength. The analytical results reveal a clear hierarchy of sensor importance, with Sensor 1 emerging as the primary determinant of system performance, exhibiting a strong positive correlation of 0.65 with the performance index. This finding indicates that Sensor 1 captures the fundamental operational characteristics that directly affect system performance and accuracy. Sensor 2, although of secondary importance, still maintains a meaningful positive correlation of 0.57, indicating its supporting role in performance improvement. In contrast, the negligible correlation of Sensor 3 highlights that not all measured parameters contribute equally to system outcomes, emphasizing the need for selective feature prioritization in AI and robotics applications. Comparative analysis between the linear regression and random forest regression models provides valuable insights into the trade-offs between model complexity and generalization ability.
Although random forest regression achieves better training performance (R² = 0.97), its significant performance drop on test data (R² = 0.53) compared to linear regression (R² = 0.65) underscores the importance of model validation and the dangers of overfitting with complex ensemble methods. This finding suggests that simpler models may sometimes provide better generalization for practical applications in AI and robotics systems. This research has practical implications for system designers and engineers working in the field of artificial intelligence and robotics. Identifying the key sensor parameters enables more efficient resource allocation, focusing measurement and computational efforts on the variables that have the most impact. This approach can lead to more cost-effective system designs without compromising performance quality. Furthermore, this study contributes to a broader understanding of performance optimization in intelligent automated systems. The framework developed here can be adapted and extended to various AI and robotics applications, from manufacturing automation to healthcare robotics, providing a systematic approach to performance prediction and system optimization. Future research should focus on expanding the dataset size to improve model robustness, exploring nonlinear relationships between sensors and performance, and investigating the integration of additional environmental variables that may affect system performance. In addition, real-time implementation studies will confirm the practical applicability of these predictive models in operational AI and robotics environments.
References
- Raj, Manav, and Robert Seamans. "Primer on artificial intelligence and robotics." Journal of Organization Design 8, no. 1 (2019): 11.
- Vrontis, Demetris, Michael Christofi, Vijay Pereira, Shlomo Tarba, Anna Makrides, and Eleni Trichina. "Artificial intelligence, robotics, advanced technologies and human resource management: a systematic review." Artificial intelligence and international HRM (2023): 172-201.
- Mir, Umar Bashir, Swapnil Sharma, Arpan Kumar Kar, and Manmohan Prasad Gupta. "Critical success factors for integrating artificial intelligence and robotics." Digital Policy, Regulation and Governance 22, no. 4 (2020): 307-331.
- Rajan, Kanna, and Alessandro Saffiotti. "Towards a science of integrated AI and Robotics." Artificial intelligence 247 (2017): 1-9.
- Urias, Müller G., Niravkumar Patel, Changyan He, Ali Ebrahimi, Ji Woong Kim, Iulian Iordachita, and Peter L. Gehlbach. "Artificial intelligence, robotics and eye surgery: are we overfitted?." International Journal of Retina and Vitreous 5 (2019): 1-4.
- Iphofen, Ron, and Mihalis Kritikos. "Regulating artificial intelligence and robotics: ethics by design in a digital society." Contemporary Social Science 16, no. 2 (2021): 170-184.
- McKendrick, M., S. Yang, and G. A. McLeod. "The use of artificial intelligence and robotics in regional anaesthesia." Anaesthesia 76 (2021): 171-181.
- Stai, Bethany, Nick Heller, Sean McSweeney, Jack Rickman, Paul Blake, Ranveer Vasdev, Zach Edgerton et al. "Public perceptions of artificial intelligence and robotics in medicine." Journal of Endourology 34, no. 10 (2020): 1041-1048.
- Börner, Katy, Olga Scrivner, Leonard E. Cross, Michael Gallant, Shutian Ma, Adam S. Martin, Lisel Record, Haici Yang, and Jonathan M. Dilger. "Mapping the co-evolution of artificial intelligence, robotics, and the internet of things over 20 years (1998-2017)." PloS one 15, no. 12 (2020): e0242984.
- Peram, S. R. (2023). Advanced Network Traffic Visualization and Anomaly Detection Using PCA-MDS Integration and Histogram Gradient Boosting Regression. Journal of Artificial Intelligence and Machine Learning, 1(3), 281. https://doi.org/10.55124/jaim.v1i3.281
- Bordot, Florent. "Artificial intelligence, robots and unemployment: Evidence from OECD countries." Journal of Innovation Economics & Management 37, no. 1 (2022): 117-138.
- Chandrasekar Raja, Ramachandran, Devipriya Mani, and Sathiyaraj Chinnasamy. "Corridor Selection for Autonomous Vehicle Deployment." Computer Science, Engineering and Technology 3, no. 1 (March 2025): 117-127.
- Belk, Russell. "Ethical issues in service robotics and artificial intelligence." The Service Industries Journal 41, no. 13-14 (2021): 860-876.
- PK Kanumarlapudi. (2023) Strategic Assessment of Data Mesh Implementation in the Pharma Sector: An Edas-Based Decision-Making Approach. SOJ Mater Sci Eng 9(3): 1-9. DOI: 10.15226/2473-3032/9/3/00183
- Golubchikov, Oleg, and Mary Thornbush. "Artificial intelligence and robotics in smart city strategies and planned smart development." Smart Cities 3, no. 4 (2020).
- Yin, Robert K., and Gwendolyn B. Moore. "The use of advanced technologies in special education: Prospects from robotics, artificial intelligence, and computer simulation." Journal of Learning Disabilities 20, no. 1 (1987): 60-63.
- Fox, Maria, Luc De Raedt, Félix Ingrand, and Malik Ghallab. "Robotics and artificial intelligence: A perspective on deliberation functions." AI communications 27, no. 1 (2014): 63-80.
- Raghavendra Sunku. (2023). AI-Powered Data Warehouse: Revolutionizing Cloud Storage Performance through Machine Learning Optimization. International Journal of Artificial intelligence and Machine Learning, 1(3), 278. https://doi.org/10.55124/jaim.v1i3.278
- Tussyadiah, Iis. "A review of research into automation in tourism: Launching the Annals of Tourism Research Curated Collection on Artificial Intelligence and Robotics in Tourism." Annals of Tourism Research 81 (2020): 102883.
- Ness, Stephanie, Nicki James Shepherd, and Teo Rong Xuan. "Synergy between AI and robotics: A comprehensive integration." Asian Journal of Research in Computer Science 16, no. 4 (2023): 80-94.
- Ashrafian, Hutan. "Artificial intelligence and robot responsibilities: Innovating beyond rights." Science and engineering ethics 21 (2015): 317-326.
- Sridhar Kakulavaram. (2022). Life Insurance Customer Prediction and Sustainability Analysis Using Machine Learning Techniques. International Journal of Intelligent Systems and Applications in Engineering, 10(3s), 390. Retrieved from https://ijisae.org/index.php/IJISAE/article/view/7649
- Wang, Hui, Han Zhang, Zhezhi Chen, Jian Zhu, and Yue Zhang. "Influence of artificial intelligence and robotics awareness on employee creativity in the hotel industry." Frontiers in Psychology 13 (2022): 834160.
- Tulli, Sai Krishna Chaitanya. "Artificial intelligence, machine learning and deep learning in advanced robotics, a review." International Journal of Acta Informatica 3, no. 1 (2024): 35-58.
- Wong, Daniel, Ryan Zink, and Sven Koenig. "Teaching artificial intelligence and robotics via games." In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 24, no. 3, pp. 1917-1918. 2010.
