CHR vs. Human-Computer Interaction Design for Emerging Technologies: Two Case Studies

Hindawi Advances in Human-Computer Interaction, Volume 2023, Article ID 8710638, 11 pages. https://doi.org/10.1155/2023/8710638

Research Article

Sharefa Murad (1), Abdallah Qusef (2), and Muhanna Muhanna (3)
(1) Department of Computer Science, Middle East University, Jordan
(2) Software Engineering Department, Princess Sumaya University for Technology, Jordan
(3) Computer Graphics Department, Princess Sumaya University for Technology, Jordan
Correspondence should be addressed to Abdallah Qusef; a.qusef@psut.edu.jo
Received 11 August 2022; Revised 7 December 2022; Accepted 9 January 2023; Published 14 February 2023
Academic Editor: Ahmad Althunibat
Copyright © 2023 Sharefa Murad et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Recent years have seen a surge in interest in the multifaceted topic of human-computer interaction (HCI). Since the advent of the Fourth Industrial Revolution, the significance of human-computer interaction in the field of safety risk management has only grown, yet comparatively little attention has been paid to developing human-computer interaction for identifying potential hazards in buildings. After conducting a comprehensive literature review, we developed a study framework for the use of human-computer interaction in the identification of construction-related hazards (CHR-HCI). Future studies will focus on the intersection of computer vision, VR, and ergonomics. In this research, we have built a theoretical foundation for past studies' findings and connections and offered concrete recommendations for the improvement of HCI in hazard identification in the future. Moreover, we analyzed two case studies related to the domain of CHR-HCI, covering wearable vibration-based systems and context-aware navigation.

1. Introduction

The importance of efficient human-computer interaction has grown with the prevalence of computers. Human-computer interaction (HCI) is the study of how humans and computers work together, and specifically of how well computers are designed to work with humans. The use of computers has always raised the question of how to connect with them, and humans' means of communicating with computers have progressed considerably over the years. While we have come a long way in the previous several decades, we still have a long way to go. Every day, new technological and system designs emerge, and research into this field has exploded. Not only has the quality of communication between humans and computers improved, but the HCI discipline has also diversified over time. Different areas of study now pay more attention to multimodality and adaptable user interfaces than to the design of traditional command- and action-oriented user interfaces.

In the discipline of civil engineering, a "hazard" is frequently defined as a source of energy that, if released and resulting in exposure, might cause harm or death [1]. Because of construction's unique challenges, the industry as a whole has a comparatively low hazard identification rate (66.5%) compared with other sectors; individually, even among construction employees with more than ten years of experience, the hazard identification rate is below 80% [2]. In order to lower the accident incidence and guarantee the safety of construction workers, it is crucial to recognise possible risks effectively. However, the current state of the art in hazard identification is monomodal and places too much weight on human intuition [3]. One key reason why the worldwide number of deaths in the construction industry has not yet clearly decreased is that hazard detection technology has evolved slowly and has failed to satisfy the demands of the industry's development. These days, both worker safety and the long-term viability of the construction sector rely on the ability to accurately identify possible dangers [4].
Therefore, the rapid pace of the Fourth Industrial Revolution is pushing the widespread use of human-computer interaction technologies in the construction sector, which in turn is propelling developments in hazard identification tools. For example, scholars such as Schulte et al. [5] are working to model, measure, and improve the efficacy of various types of interfaces between computer applications and construction workers, as well as to maximise the accuracy with which data are mapped from one modality to another. It can therefore be deduced that there is both a robust body of academic literature and substantial room for growth in the field of human-computer interaction technology as it pertains to hazard detection in the built environment [3].

Here, we use the term "CHR-HCI" to refer to studies that investigate the intersection of HCI and hazard recognition in the built environment. While the work of the selected few researchers has been extensive, not nearly enough attention has been paid to establishing a broad context for these investigations [3]. Therefore, this study aims to do the following: (1) review the related work presented in the literature of CHR and HCI; (2) analyze two case studies related to this field; and (3) identify directions for future research.

2. Literature Review

In the following sections, the study gives a detailed analysis of the reviewed works related to human-computer interaction design approaches by synthesising the previous literature.

2.1. Overview of Human-Computer Interaction Design

Human-computer interaction is the study of how to create efficient computer systems via assessment, design, and implementation [6, 7]. HCI is the most crucial step in the creation of any kind of computer system since it is a central aspect of "man-machine systems" [8], whose participation is not only about the work at hand but also about the mutual understanding that might result from being in the same room [9], facilitating the "creation of input and output modalities of information" [10] as a means of comprehending human interaction with robots. Any interface's success depends on how well it facilitates "human and computer system communication" in its entirety [11]. In a similar vein, Sumak et al. [12] emphasised that an efficient user interface is one that achieves faultless and harmonious interactions between humans and computer systems, as this is the only way in which people's mental loads can be fundamentally reduced and their "operational abilities" enhanced [6].
2.2. Methods for HCI Design

Data and information are entered into and extracted from a computer during the process known as "human and computer interaction" [13] through a specialised user interface: users give their instructions to the system, which examines those inputs, computes and processes them, and then returns the results to the users through the same interface [14]. There are a variety of channels through which information is exchanged between humans and machines in the modern day, such as data communications, numerical and symbolic interaction, voice interaction, and intelligent interaction [11]. Tosi [6] and Jeon et al. [9] have proposed subclassifications within the three main parts of the interface design process: interactive design, structural design, and visual design [15]. For instance, "the types of interactions" and "how the interactions take place" might further categorise interactive design, which is concerned with people's interactions with systems [7]. When creating an interactive interface, it is crucial to keep in mind factors such as "people's orientation, consistency, users' operation ability, shortcuts, assistance, and feedback," as emphasised by Esposito et al. [16, 17]. Structural design, in turn, may be broken down into three subcategories that focus on analysing individual requirements, the rationale for carrying out the work, and the way in which the task was designed [18, 19]. Finally, "visual design," which involves combining "complexity and imagery," aims to make consumers pleased with the interface [19], regardless of what other research studies have revealed [20]. Discussion about how best to design cutting-edge IT (emerging technologies) has spread to the "HCI discourse" during the last decade and has regularly urged a reevaluation of current practises in interface creation [19]. Information system experts are increasingly interested in learning about HCI methods of development; therefore, the question of HCI interface standards for new technologies has become a hot topic of debate [13].

3. Research Methodology

3.1. Paper Retrieval

First, the information-gathering tools were located. The databases used for the literature search were Scopus, ACM Digital Library, Web of Science, and Google Scholar, chosen after much deliberation and comparison.

Second, the literature types to explore were chosen. Journal articles focusing on HCI technology and risk assessment serve as the primary literature foundation for this investigation. Academic conferences are an essential route for academics to discuss research results and address scientific difficulties in this subject, so conference papers should also be a crucial element of the literature resources for studying hazard recognition and HCI [3].

Finally, the constraints guiding the literature search were set. In order to retrieve papers, researchers have to be very specific about what they are looking for and what time period they are examining. For the terms "construction," "hazard," "recognition," "human-computer," and "interaction," synonyms and antonyms were identified from a dictionary. The following procedure was used to guarantee that the literature search was thorough and exhaustive [3]: synonyms and antonyms were linked using Boolean operators, and the resulting combinations were used to query the various data stores. Using the most relevant keywords, abstracts, and publications from the search, we then added any missing synonyms and near-synonyms. A short sketch of how such a query can be assembled is given below.
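As an illustration only, the Boolean combination step described above can be sketched in a few lines of Python. The synonym lists, the helper function, and the output format are illustrative assumptions; the actual term lists and database-specific query syntax used in [3] are not reproduced here.

```python
# Illustrative sketch of the Boolean query construction described above.
# The synonym lists are examples only; each database (Scopus, Web of
# Science, etc.) has its own field syntax.

synonyms = {
    "construction": ["construction", "building", "built environment"],
    "hazard": ["hazard", "risk", "danger"],
    "recognition": ["recognition", "identification", "detection"],
    "human-computer interaction": ["human-computer interaction", "HCI",
                                   "human-machine interaction"],
}

def build_query(groups):
    """Join synonyms with OR inside a group and AND across groups."""
    blocks = []
    for terms in groups.values():
        quoted = " OR ".join(f'"{t}"' for t in terms)
        blocks.append(f"({quoted})")
    return " AND ".join(blocks)

print(build_query(synonyms))
# ("construction" OR "building" OR ...) AND ("hazard" OR ...) AND ...
```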
3.2. Bibliometric Analysis Method

After extracting data from the four databases, the study team compared titles to create an initial literature list. Second, the publication names were examined and the abstracts were verified to make sure there were no duplicates or irrelevant studies. As a third stage, the broad subject matter of the literature was studied to further weed out noncompliant material based on the results of the first two steps. In the end, 274 publications met all of the criteria and were included in the analysis [3].

CiteSpace and VOSviewer were used for the bibliometric analysis once the sample was selected. Basic information analysis, cluster analysis, and keyword co-occurrence analysis were used to thoroughly identify the current research state and future development trends in this subject, as represented by abstracts and keywords.

3.3. Basic Information Analysis

The underlying data from the 274 publications were analysed once the sample was determined. The major purpose of this section, like the descriptive statistics in some experimental research, is to give readers the basics, such as the number of annual publications and the make-up of literature types in this field. Examining the distribution of different types of publications (journals, conferences, and reviews) over time sheds light on the development of knowledge and may provide clues as to the future of CHR-HCI.

3.4. Number of Annual Publications

Figure 1 displays the trend in yearly publications from 2000 to 2021 [3]. Most years before 2009 had a relatively low number of relevant articles. Publications have been on the rise since 2011, and especially since 2015, increasing from nine articles in 2015 to 59 papers in 2021. This demonstrates how much has been written on this topic despite the influence of the COVID-19 epidemic. Furthermore, a regression model was fitted using the least-squares approach, with the number of publications as the dependent variable and the year as the independent variable; the resulting slope is positive, as illustrated by the dashed line in Figure 1. In addition, a Price-type index relating the publications of the most recent five years (2017-2021, used as a proxy because 2022 had not yet concluded) to all publications in the 2000-2021 window was computed; with a value of 0.068, it is clear that studies in this area will continue to grow rather than go stale. Therefore, it can be concluded that CHR-HCI research has garnered considerable interest and has been a rapidly expanding field of study in recent years, as evidenced by the increasing volume of annual publications [3].

Figure 1: Regression analysis results [3] (number of publications vs. year of publication, 2000-2021; fitted trend y = 1.9407x - 3887.9, R² = 0.6873).
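The trend fit and index computation described above can be sketched as follows. The yearly counts below are placeholders rather than the data behind Figure 1, and because the text is ambiguous about which quantity forms the numerator of the index, the sketch uses the common "recent share of total" convention, which may differ from how the reported 0.068 was obtained.

```python
# Sketch of Section 3.4: an ordinary least-squares fit of publication counts
# against year, plus a Price-type index comparing recent output with the
# whole 2000-2021 window. The counts are illustrative placeholders.
import numpy as np

years = np.arange(2000, 2022)
counts = np.array([1, 0, 1, 2, 1, 2, 3, 2, 4, 3, 5,
                   7, 8, 10, 12, 9, 15, 20, 28, 35, 47, 59])  # placeholder data

slope, intercept = np.polyfit(years, counts, 1)   # least-squares line
print(f"fitted trend: y = {slope:.4f}x + {intercept:.1f}")

recent = counts[(years >= 2017) & (years <= 2021)].sum()
price_index = recent / counts.sum()               # recent-five-year share
print(f"Price-type index: {price_index:.3f}")
```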
3.5. Composition of Literature Types

As shown in Figure 2, articles accounted for 56 percent of all investigations. They were followed by conference papers (42%) and review articles (2%) [3].

Figure 2: Composition of literature types [3] (article, conference paper, review).

3.6. Keyword Co-Occurrence Network

The scientific knowledge graph shown in Figure 3 of [3] reveals the development of CHR-HCI research through keyword co-occurrence analysis. The cluster analysis of the term co-occurrence network yielded a mean silhouette value of 0.7533 and a modularity (Q) value of 0.796, both of which are credible. Figure 3 depicts the term co-occurrence network, which can be used directly in cluster analysis. The research terms in Figure 3 can be classified into two groups, one for lower-level concepts and one for higher-level concepts, depending on their frequency of occurrence. At the top is the overarching research question, followed by a tier of keywords related to human-computer interaction and a tier of keywords concerning construction safety and hazard recognition.

Figure 3: Keyword co-occurrence network [3].

3.6.1. Terms Related to Human-Computer Interaction

The term "human-computer interaction" is used to describe the dynamic in which human beings and computer-related machinery coexist during the execution of a predetermined automated task. Due to this, there has been a dramatic improvement in the detection of danger. There are three main categories into which the current HCI research on hazard recognition can be sorted: key technologies, typical products, and product performance.

Technologies that are "key" to the development of HCI-related products for hazard recognition can be either fundamental or ground-breaking. Sensor technology, positioning and map construction, robot operating systems, 3D modeling, and virtual simulation are all examples of basic technologies; breakthrough technologies include computer vision, computer simulation, neural networks, and high-performance material manufacturing. Figure 3 [3] shows how the researchers' use of terms such as "virtual reality," "three-dimensional computer graphics," "computer simulation," and "computer vision" demonstrates their interest in these technologies.

Construction robots for narrow scenarios and automated construction systems for broad integration are two examples of the typical HCI products that have been developed with specific hazard recognition functions so far. Excavation robots, handling robots, and painting robots are all examples of scene-specific robots that can recognise hazards and perform the same tasks repeatedly. ABCS systems and SMART systems with more comprehensive hazard recognition functions are two examples of automated construction systems used in integrated scenarios, and both can integrate multiple single-task robots [21].

When discussing the product performance of HCI in the context of hazard recognition, we are talking about product attributes, product cost, operation efficiency, operation quality, and operation safety [3]. Both horizontal and vertical comparisons of human resources, building material consumption, machine quality, machine power, machine load, movement speed, operation accuracy, and so on, as well as comparisons between typical HCI products and traditional operation methods, can be used to assess performance [22]. Integration of design and construction, increased mobility in humanoid robots, and improved load capacity and positioning accuracy in intelligent machinery were all areas where HCI products applied to hazard recognition were expected to focus in the future.
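The keyword co-occurrence analysis of Section 3.6 was carried out with VOSviewer and CiteSpace. Purely as an illustration of the underlying idea, a minimal sketch using the networkx library and invented keyword lists is shown below, including a modularity (Q) computation analogous to the value reported above.

```python
# Minimal illustration of keyword co-occurrence analysis. The original study
# used VOSviewer and CiteSpace; this sketch only shows the idea with networkx
# and a handful of made-up keyword lists (not the 274-paper sample).
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

papers = [
    ["computer vision", "hazard recognition", "deep learning"],
    ["virtual reality", "safety training", "hazard recognition"],
    ["ergonomics", "posture recognition", "computer vision"],
    ["computer simulation", "risk analysis", "safety training"],
]

G = nx.Graph()
for keywords in papers:
    for a, b in combinations(sorted(set(keywords)), 2):
        w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=w)   # co-occurrence count as edge weight

communities = greedy_modularity_communities(G, weight="weight")
Q = nx.algorithms.community.modularity(G, communities, weight="weight")
print(f"{G.number_of_nodes()} keywords, {G.number_of_edges()} links, Q = {Q:.3f}")
```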
3.6.2. Terms Related to Construction Safety and Hazard Recognition

There has been a significant paradigm shift in the area of CHR-HCI research over the last 21 years, with the emphasis moving from accident investigation to hazard prediction and prevention [23]. Forecasting is the keyword in Figure 3 that illustrates this change [3]. As opposed to looking at accidents after they have already happened, the focus of accident prevention and hazard prediction is on making sure workers in the construction industry are aware of and prepared for any prospective dangers [3]. Because of this shift in philosophy, terms such as "risk perception" and "risk analysis" have emerged as vital tools for helping construction workers see potential dangers in high-stakes settings [24].

Earthquakes, a significant natural hazard, have also attracted scientists' ongoing interest in the study of risk prediction and hazard awareness. Researchers have begun promising new inquiries from the vantage points of earthquake design, urban planning, and cutting-edge materials [25]. The evolution of this field is reflected in its vocabulary: terms such as "earthquakes," "seismic design," "seismology," "architectural design," and "reinforced concrete" are all part of the study of earthquakes and their effects [26].

Alterations in management structure in this area are also reflected in the keyword co-occurrence network. In order to make accident prevention and hazard prediction a reality, revolutionary changes in organisational management and safety technology are essential [27]. Due to the inextricable link between management and construction safety, experts are always looking for new ways to enhance the industry's safety record. The evolution of this field is reflected in the rise of new concepts such as decision-making, monitoring, safety training, and risk management. After 21 years of study, scholars such as Yeo et al. [28] consider risk management, risk decision-making, engineering structural health, and safety training in engineering construction to be significant areas of inquiry.

3.7. Cluster Analysis

Cluster analysis was used to describe the most important developments in the field of CHR-HCI [3]. Using optimal computational techniques from statistics, cluster analysis can be applied to text data to uncover interesting study subjects. In this investigation, VOSviewer and CiteSpace were used for cluster analysis, with CiteSpace being used to fine-tune the data obtained from VOSviewer. Log-likelihood ratio, mutual information, and greatest word frequency are the three most commonly used approaches to naming modules in CiteSpace [29, 30]. Because the resulting names are descriptive, the highest word frequency technique was used to name the modules. Figure 4 [30] shows the results of the analysis and optimization, which led to the identification of four modules with no clear link between them: computer vision, ergonomics, computer simulation, and virtual reality [3].

Figure 4: Cluster analysis [30].
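The module detection itself was done in VOSviewer and CiteSpace. The following is a much simplified, analogous sketch with scikit-learn, assuming invented abstracts, in which each cluster is named after its highest-weighted term, loosely mirroring the "highest word frequency" labelling described above.

```python
# Simplified analogue of the cluster analysis: vectorise abstracts, group them
# with k-means, and name each group after its strongest term. The abstracts
# are invented placeholders, not the reviewed corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

abstracts = [
    "convolutional networks for computer vision based hazard detection",
    "virtual reality safety training for construction workers",
    "discrete event simulation of construction hazards",
    "ergonomic posture recognition with wearable sensors",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(abstracts)
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for c in range(4):
    centre = km.cluster_centers_[c]
    label = terms[centre.argmax()]          # strongest term names the module
    members = [i for i, lab in enumerate(km.labels_) if lab == c]
    print(f"cluster {c}: '{label}', papers {members}")
```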
3.7.1. Cluster 1: Computer Vision

Out of a total of 251 articles found, 177 were directly relevant to this keyword [3], which highlights the important role that computer vision plays in hazard identification investigations. Constant refinement of deep learning techniques such as convolutional neural networks, stacked autoencoder network models, and deep belief networks underpins recent developments in computer vision technology. Topics such as content-based picture extraction, posture assessment, multimodal data identification, autosomal motion, image tracking, scene reconstruction, image recovery, and system integration are crucial areas of study. There are two main lines of inquiry in computer vision related to hazard recognition [3]; as an example, Luo et al. [31] have developed models and analysed cognitive connections.

3.7.2. Cluster 2: Ergonomics

Since 2015, CHR-HCI has been strongly tied to ergonomics, which has progressed toward more diversity, humanization, and intelligence [3]. In order to enhance the efficiency of hazard identification, scientists are now using physiological and psychometric methods to investigate the rational coordination between the structural-functional, psychological, and mechanical components of the human body and computers [32]. Sixty-five of the 251 papers retrieved were associated with this keyword, demonstrating that the relationship between construction hazard identification and ergonomics is substantial and that a large number of researchers have studied it carefully. Driven by this transformation of concepts and technological methods [3], task assessment and quantification, brain-computer interfaces, and experimental paradigms in engineering psychology are now at the centre of this field's investigation [3].

3.7.3. Cluster 3: Computer Simulation

A computer simulation, often called an "emulation," is software designed to mimic the behaviour of a model of a system in order to learn more about that system [33]. Of the 251 articles found, 97 were directly connected to this search term [3]. With the goal of simulating hazards in construction scenarios through simulation software and external parameters, current hazard recognition research in computer simulation focuses on discrete simulation, analogous simulation, simulation based on probe elements, and simulation of stochastic processes or deterministic models [3]. Creating new code and improving upon preexisting systems are both vital parts of this work. Discrete event simulation languages such as GPSS, SIMSCRIPT, GASP, CSL, and SIMULA, and continuous system simulation languages such as DARE, ACSL, CSS, and CSSL, have been continuously optimised by a large number of researchers, laying a firm groundwork for human-computer interaction technology and fostering the growth of hazard recognition [34].

3.7.4. Cluster 4: Virtual Reality

The purpose of virtual reality (VR) technology is to allow people to experience a computer-generated environment with all their senses [35]. The 52 articles found for this keyword among the 251 results show that the introduction of virtual reality into the area of hazard detection has great potential for future growth [3]. From the standpoint of technological development, scholars are trying to optimise dynamic environment modeling, real-time 3D graphics creation, stereo display and sensor technology, and system integration technology [3]. From an application standpoint, virtual reality technology is primarily developed for use in construction risk assessment and worker safety training. The high cost of manufacture and the unreliability of the user's visual experience are two of virtual reality's key technological drawbacks [36].

4. Case Studies and Analysis

Two case studies are offered here to highlight how HCI research may include human values throughout the process.
4.1. Case Study 1: Wearable Vibration-Based Computer Interaction and Communication System for the Deaf [37]

Information technology is being put to good use in many facets of modern life, and machines have become more vital because of the difficulties people have in conveying and processing information. One of the primary goals of speech recognition systems is to permit more widespread usage of computer systems that aid people's work in a variety of professions by allowing them to communicate through voice [37].

Humans rely mostly on verbal exchanges for communication [37]. From speech, it is possible to understand and identify the speaker, their gender, age, and emotional state [38]. Humans' ability to communicate verbally begins in their minds, where a combination of motivation and neuronal activity produces audible speech. Speech is received by the auditory system, which transforms it into neural signals that the brain can interpret [39].

The inability to localise the source of a sound is the primary challenge faced by those with hearing loss. The primary aim of this research [37] was to find a way to help the hearing-impaired identify the source of an incoming sound and move in that direction. The other goal was to make sure that people with hearing loss could still understand who was talking and how loud they were talking. A voice recognition application's primary function is to take in speech data and generate an approximate interpretation. To do so, the captured audio from the microphone must be converted from analogue to digital, after which the characteristics of the acoustic signal can be extracted and used to identify critical features.

Two characteristics of the sound wave itself are especially important: amplitude and frequency [37]. The treble and bass qualities of a sound are determined by its frequency, while the intensity and energy of a sound are established by its amplitude. Analysis and classification of acoustic signals on this basis are useful for sound recognition systems.
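The two signal properties singled out above, amplitude and frequency, can be extracted from an audio frame as in the following sketch. The exact feature set and frame length used in [37] are not given in this summary, so the RMS/FFT computation and the 16 kHz sample rate below are assumptions.

```python
# Sketch: RMS amplitude (intensity) and dominant frequency of one audio frame.
import numpy as np

def frame_features(frame, sample_rate=16000):
    """Return (rms_amplitude, dominant_frequency_hz) for one audio frame."""
    rms = np.sqrt(np.mean(frame ** 2))            # intensity / energy
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    dominant = freqs[spectrum[1:].argmax() + 1]   # skip the DC bin
    return rms, dominant

# Example: a 440 Hz tone sampled for 32 ms.
t = np.arange(0, 0.032, 1 / 16000)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
print(frame_features(tone))   # roughly (0.35, ~440 Hz)
```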
Real-time tests of the wearable device have also been conducted and the results compared. The device, worn by the user as shown in Figure 5 [37], alerts a deaf wearer to nearby sounds through vibrations transmitted in real time through the user's clothing.

Figure 5: Testing the wearable device on the user in real time [37].

A further goal of this research [37] was to determine whether individuals with hearing loss could detect sounds such as brake or horn noises coming from behind them. People who have trouble hearing may experience distress when noises approach from behind, and the ability to perceive brake and horn sounds is crucial for travelling safely. The aim is to develop a product that people with hearing difficulties can use on a daily basis to improve their lives, giving them instantaneous, real-time access to additional perception and decision-making ability.

Ketabdar and Polzehl's research [40] included creating a smartphone app that analyses sound, detects vibrations, and displays alerts in the event of a loud incident. This programme is helpful for the deaf and anyone with hearing impairments since it alerts them to nearby loud activity. The mobile phone's microphone is used by the spoken content analysis algorithm to collect data on the user's environment, which is then analysed for shifts in the level of background noise. When changes occur, the app alerts the user with visual or vibratory-tactile cues that correspond to the altered speech content, so the user knows about the event [37]. With further study of user actions, this algorithm may be extended to more tasks [40]. In related work, Shivakumar and Rajasenathipathi used hardware control techniques and a screen input application to link people who are deaf or blind to a computer, for example through vibrating gloves, so that they can use modern computer technology for communication [41].

The wearable solution underwent preliminary testing and deployment in the field. Incoming data were estimated in real time, and the user was updated instantly through vibrations; as the system reacts and reroutes the user, the wearable device predicts the direction again. This procedure was used to determine which of the candidate methods was the most effective, and that method was then put into use. Subjects were played recordings of voices coming from a variety of locations and asked to identify their source, and the success of the wearable system was evaluated by comparing these results with those obtained in the real world [37].

The second step involved connecting the system to a computer and bringing the voices and their instructions into the digital realm. Each time data were gathered from the four separate microphones, they were stored in a matrix, and this process continued until a sizable data set had been amassed. Preprocessing, feature extraction, and classification were then applied to the generated data, and the results were compared with the live application and discussed in context [37].
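The published system trains a classifier on the extracted features; since its exact form is not described here, the following sketch reduces the four-microphone direction step to the simplest possible rule, attributing a detected sound to the microphone with the highest frame energy.

```python
# Hedged sketch of the direction step: one microphone per cardinal direction,
# and the loudest frame wins. The real system's classifier is not specified
# in this summary.
import numpy as np

DIRECTIONS = ("front", "back", "left", "right")   # one microphone per direction

def estimate_direction(frames):
    """frames maps a direction name to that microphone's audio frame."""
    energy = {d: float(np.mean(sig ** 2)) for d, sig in frames.items()}
    return max(energy, key=energy.get)

rng = np.random.default_rng(0)
frames = {d: 0.01 * rng.standard_normal(512) for d in DIRECTIONS}
frames["left"] = frames["left"] + 0.3 * rng.standard_normal(512)  # sound on the left
print(estimate_direction(frames))                                  # -> "left"
```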
Four microphone ports were integrated into the final wearable system (Figure 6 [37]). To ensure clear audio in all cardinal directions, four microphones were used; initial tests were conducted with only three, but four proved necessary because of the system's low success rates and the fact that there are four main directions. They were positioned to the right, left, front, and back of the user (Figure 6 [37]), and using four microphones rather than three improved accuracy in the experiments. The system's design called for two vibration motor outlet units, one on each fingertip, to indicate the direction of sound via vibration patterns. The high concentration of nerves in the fingertips is the primary reason for this choice; furthermore, vibration motors positioned on the fingers are more user-friendly and cause less disruption [37].

Figure 6: The developed wearable system [37].

The designed system also has four LED outlets; when a sound is detected, the LED of the outlet facing the direction of the vibration is illuminated. The combination of vibration and LED lights enhances the user's ability to identify the correct direction, with the LEDs providing a visible alert that the user can glance at if confused by the vibrations. The possibility of using four distinct LED lights for the four cardinal directions is being studied. In this investigation, vibration serves to stimulate the sense of touch in those who are deaf or hard of hearing; hearing-impaired people have a better chance of comprehending and feeling at ease if they can communicate with others via touch [37].

A 32-bit MCU based on the ARM architecture and flash storage were included in the design [37]. It has a maximum frequency of 72 MHz, a 3.6 V supply, seven timers, two ADCs, and nine communication interfaces. The wearable gadget runs on rechargeable batteries, giving about 10 hours of run time. Vibration allows the wearer to detect the direction from which an incoming sound is coming 20 milliseconds after the vibration is given; that is, the listener is able to recognise the incoming sound within 20 ms [37].

A total of eight directions were used during five days of testing with four deaf people and two individuals with mild hearing loss, with findings compared to those of normal-hearing participants. Effectiveness was measured by playing recordings made from the left, right, front, and back and identifying where these directions intersected. The data from the four- and eight-direction studies were analysed, and further tests were conducted in both controlled and natural settings [37].

Actual human subjects were employed as sound generators in the real-time studies [37]. An outdoor stroll would be interrupted by a call from behind, and the user's ability to perceive the voice was measured; in the computer experiments, the system played the audio through a loudspeaker. The participant's left and right fingertips were attached to vibration motors, and microphones were placed to their right, left, behind, and in front. For instance, the left fingertip's vibration motor would activate in response to a sound coming from the left; the right and left vibration motors were used for the forward and backward directions respectively, with three quick vibrations produced for a forward sound and cycles of three vibrations for the back, right, and left directions. The typical time taken for the user to discern the direction with the product is 70 milliseconds. This research also helped classify sound sources by how loud or quiet they are, so that those with hearing impairments could pay attention: when someone was making a loud noise nearby, for instance, users with hearing impairments could still comprehend what was going on and behave accordingly.
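A hedged sketch of the feedback logic is given below. The mapping from directions to motor pulse patterns is only loosely described in the text, so the table is an assumption, and the motor/LED calls are stand-in stubs rather than the actual MCU firmware.

```python
# Sketch of the direction-to-actuator mapping: two fingertip vibration motors
# plus per-direction LEDs. Pulse counts are assumed, not taken from [37].
import time

PATTERNS = {                      # direction -> (motor, number of pulses)
    "left":  ("left_motor", 1),
    "right": ("right_motor", 1),
    "front": ("right_motor", 3),  # three quick pulses, per the description
    "back":  ("left_motor", 3),
}

def drive_motor(motor, pulses, pulse_s=0.02):
    for _ in range(pulses):       # stub: replace with GPIO/PWM calls on the MCU
        print(f"{motor} ON"); time.sleep(pulse_s)
        print(f"{motor} OFF"); time.sleep(pulse_s)

def signal_direction(direction):
    motor, pulses = PATTERNS[direction]
    print(f"LED[{direction}] ON")  # stub for the matching LED outlet
    drive_motor(motor, pulses)

signal_direction("front")
```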
4.2. Case Study 2: Context-Aware Navigation System [42]

In mobile navigation, context awareness is a fascinating issue because of the great degree of application-specific variation. Navigation services take the user's current circumstances into account not just during development but also in real time while the device is being used. A user's behaviour and the device's location are two examples of circumstances that might influence the services a mobile navigation app offers. The article [42] addresses the problems of context-aware systems, which include acquiring context, interpreting context, and adapting applications to context. The work proposes a method for strengthening the precision and dependability of context-aware navigation systems through the use of inexpensive sensors in a multilevel fusion strategy. The experiments show that smartphones may be used for outdoor navigation with the help of context-aware personal navigation systems (PNS) [42].

Applications that are "context-aware" take external factors, such as the user's actions, into account when making judgments about the user and/or the environment. While many approaches have been explored for automatic context and environment recognition in context-aware applications (such as healthcare, sports, and social networking), there is still room for improvement. The study presented in [42], for example, is one of the first to apply user activity context to PNS, and more specifically to vision-aided navigation [43]. For the purpose of recognising and using context in PNS applications, a new hybrid paradigm was introduced: when using a navigation app, the user's current activity (such as walking or driving) and the device's current location and orientation provide valuable context.

In the field of pervasive computing, Caetano suggested a hybrid methodology combining the best features of data-driven and knowledge-driven approaches [44]. Arato et al. proposed a knowledge-driven hybrid method for continuous, real-time activity recognition in smart homes using multisensor data [45]. In that research, ontology-based semantic reasoning and classification are used for activity recognition, but domain knowledge is heavily leveraged throughout the entire process [42].

An activity recognition module was created to determine which sensors and features best aid the development of a reliable context detection algorithm. With the help of this module and a battery of experiments, it was possible to gauge how well it performed across a variety of user motions and modes [42]. The data collection for this study was performed using a Samsung Galaxy Note 1 smartphone. The proposed context-aware model for navigation services uses a client-server software architecture, which allows the application logic to be separated between the user's local Android device and a server-side resource with access to more extensive data stores and processing capabilities. One example is sending the average value of a window of recorded accelerometer data from the local Android device to a web server for comparison against a database of context patterns; a sketch of this client-side step is given below. Wi-Fi allows for instantaneous data synchronisation with the server. An app was built to capture information from the mobile device and transmit it to the server [42]. This software creates timestamped data that can be used in real time. The main software and the user's data are stored on remote servers, and end users access the applications through a lightweight mobile application. Automatically or at the user's request, all relevant sensor data for detection were preprocessed and sent to the server; the results of the context detection and navigation solution are then sent back to the mobile user. Two men and two women, ranging in age from 26 to 40, participated in the study to provide data on their physical activities [42]. Testing data were collected with the smartphone in a variety of positions, such as in a purse, in a jacket pocket, on a belt, held close to the ear while talking, and at the user's side while the arm was swung; the only restriction on how the smartphone should be worn concerns where on the body it is kept. After two minutes, data from each activity with a given device placement mode were saved to the server's database. Subjects were asked to mark the beginning and end times of their primary activities in order to construct the reference data [42].
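The client-side step mentioned above (averaging a window of accelerometer samples and sending it to the server) might look roughly like the following sketch. The endpoint URL, payload fields, and window contents are invented for illustration and are not taken from [42].

```python
# Sketch of the client side: average a short accelerometer window on the phone
# and post it to the server for matching against stored context patterns.
import time
import statistics
import requests

SERVER_URL = "http://example.com/api/context"   # placeholder endpoint

def summarise_window(samples):
    """Average each accelerometer axis over one recording window."""
    ax, ay, az = zip(*samples)
    return {
        "timestamp": time.time(),
        "mean_x": statistics.fmean(ax),
        "mean_y": statistics.fmean(ay),
        "mean_z": statistics.fmean(az),
    }

def upload(samples):
    payload = summarise_window(samples)
    requests.post(SERVER_URL, json=payload, timeout=5)  # synced over Wi-Fi

# Example window: a phone lying flat (gravity on the z axis), slightly noisy.
window = [(0.01, -0.02, 9.81 + 0.05 * (i % 3)) for i in range(50)]
print(summarise_window(window))
```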
The sensors that correlate most strongly with the activity classes are the most suitable for activity recognition. Accelerometer sensors have become increasingly popular for detecting motion, and the gyroscope can record the user's movements and the device's changing orientation. Orientation determination is a crucial feature when trying to differentiate between groups of on-body device placements and to identify the device's orientation in each placement [42]. In addition to assisting with orientation and heading determination, magnetometer sensors also provide absolute heading information. Device orientation can also be estimated using the orientation software sensor (or soft sensor) made available by the Android API. In this sensor, the orientation angles are generated by fusing three signals from an accelerometer, a gyroscope, and a magnetometer; the values of these angles characterise the relationship between the device's coordinate system and the regional navigation reference frame. The orientation soft-sensor's output can stand on its own as a sensor or be used to transform data from one coordinate system (the device's) to another (the reference navigation system). The context recognition outputs of multiple sensors are analysed as a whole [42].

Calibration and noise reduction are applied to the raw data captured by the sensors, as depicted in Figure 7 [42]. Signal processing algorithms are then applied to the data to extract useful features. Although there is a vast pool of features from which to choose, only a few should be implemented for reliable, real-time context recognition [42]. Afterwards, the feature space can be classified using classification methods. Tervo et al. [46] noted that there is a wide range of feature extraction and classification methods and that the best method to use is often context-specific.

Figure 7: Calibration process [42].
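In the spirit of the orientation soft sensor described above, the sketch below derives roll, pitch, and heading from a single accelerometer and magnetometer sample. The Android sensor additionally fuses gyroscope data, and sign conventions vary between devices, so this is a simplified assumption rather than the implementation used in [42].

```python
# Sketch: roll/pitch from the accelerometer, tilt-compensated heading from the
# magnetometer. Gyroscope fusion is omitted for brevity.
import numpy as np

def orientation(acc, mag):
    """Return (roll, pitch, heading) in degrees from one accel/mag sample."""
    ax, ay, az = acc / np.linalg.norm(acc)
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    mx, my, mz = mag / np.linalg.norm(mag)
    # Rotate the magnetic vector into the horizontal plane, then take its angle.
    xh = mx * np.cos(pitch) + mz * np.sin(pitch)
    yh = (mx * np.sin(roll) * np.sin(pitch) + my * np.cos(roll)
          - mz * np.sin(roll) * np.cos(pitch))
    heading = np.arctan2(-yh, xh)
    return tuple(np.degrees(v) for v in (roll, pitch, heading))

# Device flat on a table; magnetometer values are illustrative only.
print(orientation(np.array([0.0, 0.0, 9.81]), np.array([20.0, -20.0, -40.0])))
```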
5. Conclusion

This paper proposes a framework that categorises the CHR-HCI field into three levels, acknowledging that human-computer interaction is an emerging interdisciplinary field encompassing numerous disciplines and that hazard recognition also requires complex theoretical knowledge and practical techniques. The paper reviewed related work in the field of CHR-HCI and analysed two related case studies.

From a research perspective, hazard identification is concerned with the construction industry's practice of finding, perceiving, and recognising dangers and their influencing variables for the sake of risk assessment, accident prevention, foresight, prediction, and intelligent monitoring. The primary improvement in engineering safety philosophy during the last 21 years has been the shift from postaccident analysis to preaccident prediction and prevention, made possible by advancements in human-computer interface technology. This is one reason why we are pushing for the widespread use of HCI methods.

Theoretically speaking, there are two basic components to hazard recognition: theory pertaining to the hazards or risks involved, and theory pertaining to the actual act of recognising or identifying the hazards. Theoretical guidance for the implementation of HCI technologies may be found in fields including risk psychology, ergonomics, human factors engineering, behavioural psychology, and sociology. Academics have paid a lot of attention to engineering ethics because of its supervisory role in scientific experiments and because of the importance of engineering as science and technology progress; as such, engineering ethics should be taken into account as a fundamental compass for identifying potential dangers [47, 48].

In terms of real-world implementation, hazard recognition should find most use in computer simulation, computer vision, VR/AR, and robotics [49]. The three issues we have highlighted are where we think researchers should focus in the future when studying hazard recognition: first, finding efficient ways to process multimodal data in hazard recognition experiments; second, using these data to create intuitive devices for hazard recognition; and, as the end goal, creating a user-friendly platform for managing safety measures that uses multimodal data. Accordingly, these areas of study have already seen some practical application and also point in clear future directions.

Data Availability

The data that support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was partially funded by Middle East University.

References

[1] A. Albert, M. R. Hallowell, M. Skaggs, and B. Kleiner, "Empirical measurement and improvement of hazard recognition skill," Safety Science, vol. 93, pp. 1-8, 2017.
[2] B. J. Ladewski and A. J. Al-Bayati, "Quality and safety management practices: the theory of quality management approach," Journal of Safety Research, vol. 69, pp. 193-200, 2019.
[3] J. Wang, R. Cheng, M. Liu, and P. C. Liao, "Research trends of human-computer interaction studies in construction hazard recognition: a bibliometric review," Sensors, vol. 21, no. 18, p. 6172, 2021.
[4] N. Aniekwu, "Accidents and safety violations in the Nigerian construction industry," Journal of Science and Technology, vol. 27, pp. 81-89, 2007.
[5] A. Schulte, D. Donath, D. S. Lange, and R. S. Gutzwiller, "A heterarchical urgency-based design pattern for human automation interaction," in Proceedings of the 15th International Conference on Engineering Psychology and Cognitive Ergonomics, Las Vegas, NV, USA, July 2018.
[6] F. Tosi, Design for Ergonomics, Springer, Manhattan, NY, USA, 2020.
[7] Y. C. Hsu and I. Nourbakhsh, "When human-computer interaction meets community citizen science," Communications of the ACM, vol. 63, no. 2, pp. 31-34, 2020.
[8] D. A. Kocsis, "A conceptual foundation of design and implementation research in accounting information systems," International Journal of Accounting Information Systems, vol. 34, Article ID 100420, 2019.
[9] M. Jeon, R. Fiebrink, E. A. Edmonds, and D. Herath, "From rituals to magic: interactive art and HCI of the past, present, and future," International Journal of Human-Computer Studies, vol. 131, pp. 108-119, 2019.
[10] F. Gutierrez, N. N. Htun, F. Schlenz, A. Kasimati, and K. Verbert, "A review of visualisations in agricultural decision support systems: an HCI perspective," Computers and Electronics in Agriculture, vol. 163, Article ID 104844, 2019.
[11] V. Righi, S. Sayago, and J. Blat, "When we talk about older people in HCI, who are we talking about? In the design of technologies for a growing and ageing population, there is a 'turn to community'," International Journal of Human-Computer Studies, vol. 108, pp. 15-31, 2017.
[12] B. Sumak, M. Spindler, M. Debeljak, M. Hericko, and M. Pušnik, "An empirical evaluation of a hands-free computer interaction for users with motor disabilities," Journal of Biomedical Informatics, vol. 96, Article ID 103249, 2019.
[13] X. Mao, K. Li, Z. Zhang, and J. Liang, "The design and implementation of a new smart home control system based on the internet of things," in Proceedings of the 2017 International Smart Cities Conference (ISC2), pp. 1-5, IEEE, Wuxi, China, September 2017.
[14] S. S. Rautaray and A. Agrawal, "Vision-based hand gesture recognition for human-computer interaction: a survey," Artificial Intelligence Review, vol. 43, no. 1, pp. 1-54, 2015.
[15] A. Pimenta, D. Carneiro, J. Neves, and P. Novais, "A neural network to classify fatigue from human-computer interaction," Neurocomputing, vol. 172, pp. 413-426, 2016.
[16] A. Esposito, A. M. Esposito, and C. Vogel, "Needs and challenges in human computer interaction for processing social emotional information," Pattern Recognition Letters, vol. 66, pp. 41-51, 2015.
[17] K. Shilton, "Values and ethics in human-computer interaction," Foundations and Trends in Human-Computer Interaction, vol. 12, no. 2, pp. 107-171, 2018.
[18] A. W. Eide, J. B. Pickering, T. Yasseri et al., "Human-machine networks: toward a typology and profiling framework," in Human-Computer Interaction. Theory, Design, Development and Practice, pp. 11-22, Springer, Manhattan, NY, USA, 2016.
[19] G. Jacucci, A. Spagnolli, J. Freeman, and L. Gamberini, "Symbiotic interaction: a critical definition and comparison to other human-computer paradigms," in Symbiotic Interaction, Lecture Notes in Computer Science 8820, G. Jacucci, L. Gamberini, J. Freeman, and A. Spagnolli, Eds., Springer Cham, Manhattan, NY, USA, 2015.
[20] C. M. Gray, "It's more of a mindset than a method: UX practitioners' conception of design methods," in Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 4044-4055, San Jose, CA, USA, May 2016.
[21] Y. Ikeda, "The automated building construction system for high-rise steel structure buildings," in Proceedings of the Council on Tall Buildings and Urban Habitat (CTBUH), Seoul, Korea, October 2004.
[22] T. Wakisaka, N. Furuya, Y. Inoue, and T. Shiokawa, "Automated construction system for high-rise reinforced concrete buildings," Automation in Construction, vol. 9, no. 3, pp. 229-250, 2000.
[23] D. Borys, "The role of safe work method statements in the Australian construction industry," Safety Science, vol. 50, no. 2, pp. 210-220, 2012.
[24] M. Zhang, T. Cao, and X. Zhao, "Applying sensor-based technology to improve construction safety management," Sensors, vol. 17, no. 8, p. 1841, 2017.
[25] S. Farahani, A. Tahershamsi, and B. Behnam, "Earthquake and post-earthquake vulnerability assessment of urban gas pipelines network," Natural Hazards, vol. 101, no. 2, pp. 327-347, 2020.
[26] M. D. Joyner and M. Sasani, "Building performance for earthquake resilience," Engineering Structures, vol. 210, Article ID 110371, 2020.
[27] G. Fu, X. C. Xie, Q. S. Jia, W. Q. Tong, and Y. Ge, "Accidents analysis and prevention of coal and gas outburst: understanding human errors in accidents," Process Safety and Environmental Protection, vol. 134, pp. 1-23, 2020.
[28] C. J. Yeo, J. H. Yu, and Y. Kang, "Quantifying the effectiveness of IoT technologies for accident prevention," Journal of Management in Engineering, vol. 36, no. 5, Article ID 4020054, 2020.
[29] L. Waltman, N. J. van Eck, and E. C. M. Noyons, "A unified approach to mapping and clustering of bibliometric networks," Journal of Informetrics, vol. 4, pp. 629-635, 2010.
[30] T. Ganbat, H. Y. Chong, P. C. Liao, Y. D. Wu, and X. B. Zhao, "A bibliometric review on risk management and building information modeling for international construction," Advances in Civil Engineering, vol. 2018, Article ID 8351679, 13 pages, 2018.
[31] H. Luo, M. Wang, P. K. Y. Wong, J. Tang, and J. C. P. Cheng, "Construction machine pose prediction considering historical motions and activity attributes using gated recurrent unit (GRU)," Automation in Construction, vol. 121, Article ID 103444, 2021.
[32] H. Zhang, X. Yan, and H. Li, "Ergonomic posture recognition using 3D view-invariant features from single ordinary camera," Automation in Construction, vol. 94, pp. 1-10, 2018.
[33] K. Kim, J. Chen, and Y. K. Cho, "Evaluation of machine learning algorithms for worker's motion recognition using motion sensors," in Computing in Civil Engineering 2019: Data, Sensing, and Analytics, pp. 51-58, American Society of Civil Engineers, Reston, VA, USA, 2019.
[34] A. Asadzadeh, M. Arashpour, H. Li, T. Ngo, A. Bab-Hadiashar, and A. Rashidi, "Sensor-based safety management," Automation in Construction, vol. 113, Article ID 103128, 2020.
[35] Z. Zhou, Y. M. Goh, and Q. Li, "Overview and analysis of safety management studies in the construction industry," Safety Science, vol. 72, pp. 337-350, 2015.
[36] Z. Hu, J. Zhang, and X. Zhang, "Construction collision detection for site entities based on 4-D space-time model," Journal of Tsinghua University Science and Technology, vol. 50, no. 6, pp. 820-825, 2010.
[37] M. Yağanoğlu and C. Köse, "Wearable vibration based computer interaction and communication system for deaf," Applied Sciences, vol. 7, no. 12, p. 1296, 2017.
[38] A. Milton and S. Tamil Selvi, "Class-specific multiple classifiers scheme to recognize emotions from speech signals," Computer Speech & Language, vol. 28, no. 3, pp. 727-742, 2014.
[39] B. Schuller, S. Steidl, A. Batliner et al., "Paralinguistics in speech and language—state-of-the-art and the challenge," Computer Speech & Language, vol. 27, no. 1, pp. 4-39, 2013.
[40] H. Ketabdar and T. Polzehl, "Tactile and visual alerts for deaf people by mobile phones," in Proceedings of the 11th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 25-28, ACM, Pittsburgh, PA, USA, October 2009.
[41] B. L. Shivakumar and M. Rajasenathipathi, "A new approach for hardware control procedure used in braille glove vibration system for disabled persons," Research Journal of Applied Sciences, Engineering and Technology, vol. 7, no. 9, pp. 1863-1871, 2014.
[42] S. Saeedi, A. Moussa, and N. El-Sheimy, "Context-aware personal navigation using embedded sensor fusion in smartphones," Sensors, vol. 14, no. 4, pp. 5742-5767, 2014.
[43] U. Gollner, T. Bieling, and G. Joost, "Mobile Lorm Glove: introducing a communication device for deaf-blind people," in Proceedings of the 6th International Conference on Tangible, Embedded, and Embodied Interaction, pp. 127-130, ACM, Kingston, ON, Canada, February 2012.
[44] G. Caetano and V. Jousmaki, "Evidence of vibrotactile input to human auditory cortex," NeuroImage, vol. 29, no. 1, pp. 15-28, 2006.
[45] A. Arato, N. Markus, and Z. Juhasz, "Teaching Morse language to a deaf-blind person for reading and writing SMS on an ordinary vibrating smartphone," in Computers Helping People with Special Needs, vol. 14, pp. 393-396, Springer International Publishing, Manhattan, NY, USA, 2014.
[46] S. Tervo, J. Pätynen, N. Kaplanis, S. Bech, and T. Lokki, "Spatial analysis and synthesis of car audio system and car cabin acoustics with a compact microphone array," Journal of the Audio Engineering Society, vol. 63, no. 11, pp. 914-925, 2015.
[47] H. Li, G. Chan, J. K. W. Wong, and M. Skitmore, "Real-time locating systems applications in construction," Automation in Construction, vol. 63, pp. 37-47, 2016.
[48] A. Montaser and O. Moselhi, "RFID indoor location identification for construction projects," Automation in Construction, vol. 39, pp. 167-179, 2014.
[49] R. Maalek and F. Sadeghpour, "Accuracy assessment of Ultra-Wide Band technology in tracking static resources in indoor construction scenarios," Automation in Construction, vol. 30, pp. 170-183, 2013.


Publisher: Hindawi Publishing Corporation
ISSN: 1687-5893
eISSN: 1687-5907
DOI: 10.1155/2023/8710638

Abstract

Hindawi Advances in Human-Computer Interaction Volume 2023, Article ID 8710638, 11 pages https://doi.org/10.1155/2023/8710638 Research Article CHR vs. Human-Computer Interaction Design for Emerging Technologies: Two Case Studies 1 2 3 Sharefa Murad , Abdallah Qusef , and Muhanna Muhanna Department of Computer Science, Middle East University, Jordan Software Engineering Department, Princess Sumaya University for Technology, Jordan Computer Graphics Department, Princess Sumaya University for Technology, Jordan Correspondence should be addressed to Abdallah Qusef; a.qusef@psut.edu.jo Received 11 August 2022; Revised 7 December 2022; Accepted 9 January 2023; Published 14 February 2023 Academic Editor: Ahmad Althunibat Copyright©2023SharefaMuradetal.TisisanopenaccessarticledistributedundertheCreativeCommonsAttributionLicense, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Recent years have seen a surge in interest in the multifaceted topic of human-computer interaction (HCI). Since the advent of the Fourth Industrial Revolution, the signifcance of human-computer interaction in the feld of safety risk management has only grown. Tere has not been a lot of focus on developing human-computer interaction for identifying potential hazards in buildings. After conducting a comprehensive literature review, we developed a study framework for the use of human-computer interaction in the identifcation of construction-related hazards (CHR-HCI). Future studies will focus on the intersection of computer vision, VR, and ergonomics. In this research, we have built a theoretical foundation for past studies’ fndings and connections and ofered concrete recommendations for the improvement of HCI in danger identifcation in the future. Moreover, we analyzed two cases studies related to the domain of CHR-HCI in terms of wearable vibration-based systems and context aware navigation. resulted in exposure, might cause harm or death [1]. Because 1. Introduction of construction’s unique challenges, the industry as a whole Te importance of efcient human-computer interaction has has a comparatively low hazard identifcation rate (66.5%) grown with the prevalence of computers. Human-computer when compared to other sectors. Individually, even among interaction (HCI) is the study of how humans and computers construction employees with more than ten years of expe- work together, specifcally how well computers are designed rience, the danger identifcation rate is below 80% [2]. In to work with humans. Te use of computers has always raised order to lower the accident incidence and guarantee the the issue of how to connect them. Humans’ means of safety of construction workers, it is crucial to efectively communicating with computers have progressed consider- recognise possible risks. However, the current state of the art ably throughout the years. While we have come a long way in in danger identifcation is monomodal and places too much the previous several decades, we still have a long way to go. weight on human intuition [3]. One of the key reasons why Every day, new technological and system designs emerge, and the worldwide number of deaths in the construction in- research into this feld has exploded. 
Not only has the quality of communication between humans and computers improved, but the human-computer interaction (HCI) discipline has also diversified over time. Different areas of study have paid more attention to the ideas of multimodality and adaptable user interfaces than they have to the design of traditional command- and action-oriented user interfaces.

In the discipline of civil engineering, a "hazard" is frequently defined as a source of energy that, if released and resulting in exposure, might cause harm or death [1]. The worldwide number of deaths in the construction industry has not yet clearly decreased, largely because hazard detection technology has evolved slowly and has failed to satisfy the demands of the construction industry's development to date. These days, both worker safety and the long-term viability of the construction sector rely on the ability to accurately identify possible dangers [4].

Therefore, the rapid pace of the Fourth Industrial Revolution is pushing the widespread use of human-computer interaction technologies in the construction sector, which in turn is propelling developments in danger identification tools. For example, scholars such as Schulte et al. [5] are working to model, measure, and improve the efficacy of various types of interfaces between computer applications and construction workers, as well as to maximise the accuracy with which data are mapped from one modality to another. As a result, it can be deduced that there is both a robust body of academic literature and substantial room for growth in the field of human-computer interaction technology as it pertains to hazard detection in the built environment [3].

Here, we use the term "CHR-HCI" to refer to studies that investigate the intersection of HCI and hazard recognition in the built environment. While the work of the selected few researchers has been extensive, not nearly enough attention has been paid to establishing a broad context for these investigations [3]. Therefore, this study aims to do the following: (1) review the related work presented in the literature of CHR and HCI; (2) analyze two case studies related to this field; and (3) identify directions for future research.
2. Literature Review

In the following sections, the study gives a detailed analysis of the reviewed works related to human-computer interaction design approaches by synthesising the previous works.

2.1. Overview of Human-Computer Interaction Design. Human-computer interaction is the study of how to create efficient computer systems via assessment, design, and implementation [6, 7]. Human-computer interaction (HCI) is the most crucial step in the creation of any kind of computer system since it is a crucial aspect of "man-machine systems" [8], whose participation is not only about the work at hand but also about the mutual understanding that might result from being in the same room [9] to facilitate "creating input and output modalities of information" [10] as a means of comprehending human interaction with robots. Any interface's success depends on how well it facilitates "human and computer system communication" in its entirety [11]. In a similar vein, Sumak et al. [12] emphasised that an efficient user interface is one that achieves faultless and harmonious interactions between humans and computer systems, as this is the only way in which people's mental loads can be reduced fundamentally and their "operational abilities" enhanced [6].

2.2. Methods for HCI Design. Data and information are entered into and extracted from a computer during the process known as "human and computer interaction" [13] by the use of a specialised user interface, whereby users give their instructions to the system before it examines those inputs, computes, and processes them and then returns the results to the users using the same interface [14]. There are a variety of channels via which information is exchanged and extracted between humans and machines in the modern day, such as data communications, numerical and symbolic interaction, voice interaction, and intelligent interactions [11]. Tosi [6] and Jeon et al. [9] have proposed subclassifications within the three main parts of the interface design process: interactive design, structural design, and visual design [15]. For instance, "the types of interactions" and "how the interactions take place" might further categorise interactive design, which is concerned with people's interactions with systems [7]. When creating an interactive interface, it is crucial to keep in mind factors such as "people's orientation, consistency, users' operation ability, shortcuts, assistance, and feedback," as emphasised by Esposito et al. [16, 17]. Again, structural design may be broken down into three subcategories that focus on analysing individual requirements, the rationale for carrying out the work, and the way in which the task was designed [18, 19]. Finally, "visual design," which involves combining "complexity and imagery," aims to make consumers pleased with the interface [19], regardless of what other research studies have revealed [20]. Discussion about how to best design cutting-edge IT (emerging technologies) has spread to the "HCI discourse" during the last decade and has regularly urged a reevaluation of current practises in interface creation [19]. Information system experts are increasingly interested in learning about HCI methods of development; therefore, the question of HCI interface standards for new technologies has become a hot topic of debate [13].

3. Research Methodology

3.1. Paper Retrieval. In the first place, the information-gathering tools were located. The databases used for the literature search were Scopus, ACM Digital Library, Web of Science, and Google Scholar, and they were chosen after much deliberation and comparison.

Second, the literary genres to explore were chosen. Journal articles focusing on HCI technology and risk assessment serve as the primary literature foundation for this investigation. Academic conferences are an essential route for academics to discuss research results and address scientific difficulties faced in this subject, so conference papers should also be a crucial element of the literature resources for studying hazard recognition and HCI [3].

In the end, constraints were applied to guide the literature search. In order to retrieve papers, researchers have to be very specific about what they are looking for and what time period they are looking at. The terms "construction," "hazard," "recognition," "human-computer," and "interaction" were looked up in the dictionary to find their near-synonyms and antonyms. The following procedures were taken to guarantee that the literature search was thorough and exhaustive [3]: synonyms and antonyms were linked using Boolean operators, and the resulting pairs were used to query the various data stores. Using the most relevant keywords, abstracts, and publications from the search, we then inserted the missing synonyms and near-synonyms.
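As an illustration of how such Boolean search strings can be assembled, the short Python sketch below ORs together each core term with its synonyms and then ANDs the concept groups. The synonym lists and the exact query syntax are assumptions for illustration; the paper does not publish its actual search strings.

```python
# Minimal sketch: build a Boolean search string for a bibliographic database.
# The synonym lists below are illustrative only, not the authors' actual lists.
CORE_TERMS = {
    "construction": ["construction", "building", "built environment"],
    "hazard": ["hazard", "danger", "risk"],
    "recognition": ["recognition", "identification", "detection"],
    "human-computer interaction": ["human-computer interaction", "HCI", "human-machine interaction"],
}

def boolean_query(terms: dict) -> str:
    """OR together the synonyms of each concept, then AND the concept groups."""
    groups = ["(" + " OR ".join(f'"{s}"' for s in synonyms) + ")"
              for synonyms in terms.values()]
    return " AND ".join(groups)

if __name__ == "__main__":
    print(boolean_query(CORE_TERMS))
    # ("construction" OR "building" OR ...) AND ("hazard" OR ...) AND ...
```

The same string can then be pasted into each database's advanced-search field, which keeps the retrieval step reproducible across Scopus, ACM Digital Library, Web of Science, and Google Scholar.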
3.2. Bibliometric Analysis Method. After extracting data from the four different databases, the study team compared the titles to create a unique beginning literature list. Second, the names of the publications were examined and then the abstracts were verified to make sure there were no duplicates or useless studies. As a third stage, the broad subject matter of the literature was studied to further weed out the noncompliant material based on the results of processes one and two. In the end, 274 publications met all of the criteria and were included in the analysis [3].

CiteSpace and VOSviewer were used for the bibliometric analysis once the sample was selected. In the study, basic information analysis, cluster analysis, and keyword co-occurrence analysis were used to thoroughly identify the current research state and future development trends in this subject, as represented by abstracts and keywords.

3.3. Basic Information Analysis. The underlying data from the 274 publications were analysed once the sample was determined. The major purpose of this section, like the descriptive statistics in some experimental research, is to give readers the basics, such as the number of annual publications and the make-up of literary genres in this field. Examining the distribution of different types of publications (journals, conferences, and reviews) over time sheds light on the development of knowledge and may provide clues as to the future of CHR-HCI.

3.4. Number of Annual Publications. Figure 1 displays the trend in yearly publications from 2000 to 2021 [3]. Most years before 2009 had a relatively low number of relevant articles published. Publications have been on the rise since 2011, and especially since 2015, increasing from nine articles in 2015 to 59 papers in 2021. This statistic demonstrates how many articles have been written on this topic despite the influence of the COVID-19 epidemic.

Figure 1: Regression analysis results [3] (distribution of related papers from 2000 to 2021; fitted trend y = 1.9407x − 3887.9, R² = 0.6873; number of publications versus year of publication).

Furthermore, a regression model was fitted using the least-squares approach, with the number of publications serving as the dependent variable and the year serving as the independent variable; the resultant slope is positive, as illustrated by the dashed line in Figure 1. Finally, the cost (price) index was determined by relating the sum of all publications in the full period (2000–2021) to the sum of only those published in the recent five years (2017–2021; because 2022 had not yet concluded, those years were used as a proxy). With a price index of 0.068, it was clear that studies in this area will only get better over time rather than become stale. Therefore, it can be concluded that CHR-HCI research has garnered considerable interest and has been a rapidly expanding field of study in recent years, as evidenced by the increasing volume of annual publications in this area [3].
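For readers who want to reproduce this kind of trend analysis, the sketch below fits a least-squares line to annual publication counts and reports the share of output concentrated in the most recent five years, one common reading of a price-index-style ratio. The count values are placeholders, not the study's underlying data.

```python
import numpy as np

# Placeholder counts per year (illustrative only; the study's actual counts are not reproduced here).
years = np.arange(2000, 2022)
counts = np.array([1, 0, 1, 2, 1, 2, 3, 2, 4, 3, 5, 6, 7, 8, 9, 9, 14, 18, 25, 33, 46, 59])

# Ordinary least-squares fit: publications = slope * year + intercept.
slope, intercept = np.polyfit(years, counts, deg=1)
r_squared = np.corrcoef(years, counts)[0, 1] ** 2
print(f"trend: y = {slope:.4f}x + {intercept:.1f}, R^2 = {r_squared:.4f}")

# Share of output published in the most recent five years.
recent = counts[(years >= 2017) & (years <= 2021)].sum()
print(f"recent-five-year share: {recent / counts.sum():.3f}")
```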
3.5. Composition of Literature Types. As shown in Figure 2, journal articles accounted for 56 percent of all investigations conducted, followed by conference papers (42%) and review articles (2%) [3].

Figure 2: Composition of literature types (article, conference paper, review) [3].

3.6. Keyword Co-Occurrence Network. The scientific knowledge graph shown in Figure 3 of [3] reveals the development of CHR-HCI research through keyword co-occurrence analysis. Second, the cluster analysis of the term co-occurrence network yielded a mean silhouette value of 0.7533 and a modularity (Q) value of 0.796, both of which are credible. Figure 3 depicts the term co-occurrence network, which may be used directly in cluster analysis. Finally, the research terms in Figure 3 can be classified into two groups, one for lower-level concepts and one for higher-level concepts, depending on their frequency of occurrence. At the top is the overarching research question, followed by a tier of keywords related to human-computer interaction and a tier of keywords concerning construction safety and hazard recognition.

Figure 3: Keyword co-occurrence network [3].
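To make the co-occurrence and modularity idea concrete, the sketch below builds a small keyword co-occurrence network with NetworkX and reports the modularity of its detected communities, in the spirit of the Q value quoted above. The toy keyword lists are assumptions; the real study derives them from the 274 abstracts.

```python
import itertools
import networkx as nx
from networkx.algorithms import community

# Illustrative keyword lists per paper (not the study's actual data).
papers = [
    ["computer vision", "hazard recognition", "deep learning"],
    ["virtual reality", "safety training", "hazard recognition"],
    ["computer vision", "construction safety", "deep learning"],
    ["ergonomics", "wearable sensors", "construction safety"],
]

# Build the co-occurrence network: keywords are nodes, shared papers add edge weight.
G = nx.Graph()
for keywords in papers:
    for a, b in itertools.combinations(sorted(set(keywords)), 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Cluster the network and report modularity, analogous to the Q value cited above.
clusters = community.greedy_modularity_communities(G, weight="weight")
q = community.modularity(G, clusters, weight="weight")
print(f"{len(clusters)} clusters, modularity Q = {q:.3f}")
```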
3.6.1. Terms Related to Human-Computer Interaction. The term "human-computer interaction" was used to describe the dynamic in which human beings and computer-related machinery coexist during the execution of a predetermined automated task. Due to this, there has been a dramatic improvement in the detection of danger. There are three main categories into which the current HCI research on hazard recognition can be sorted: key technologies, typical products, and product performance. Technologies that are "key" to the development of HCI-related products for use in hazard recognition can be either fundamental or ground-breaking. Sensor technology, positioning and map construction, robot operating systems, 3D modeling, and virtual simulation are all examples of basic technologies; breakthrough technologies include computer vision, computer simulation, neural networks, and high-performance material manufacturing. Figure 3 [3] displays how the researchers' use of terms such as "virtual reality," "three-dimensional computer graphics," "computer simulation," and "computer vision" demonstrates their interest in technology.

Construction robots for narrow scenarios and automated construction systems for broad integration are just two examples of the types of typical HCI products that have been developed with specific hazard recognition functions so far. Excavation robots, handling robots, and painting robots are all examples of scene-specific robots that can recognise hazards and perform the same tasks repeatedly. ABCS systems and SMART systems with more comprehensive hazard recognition functions are two examples of automated construction systems used in integrated scenarios, and both have the ability to integrate multiple single-task robots [21].

When discussing the product performance of HCI in the context of hazard recognition, we are talking about things such as product attributes, product cost, operation efficiency, operation quality, and operation safety [3]. Both horizontal and vertical comparisons of human resources, building material consumption, machine quality, machine power, machine load, movement speed, operation accuracy, etc., as well as comparisons of typical HCI products and traditional operation methods, can be used to assess performance [22]. Integration of design and construction, increased mobility in humanoid robots, and improved load capacity and positioning accuracy in intelligent machinery were all areas where HCI products applied to hazard recognition were expected to focus in the future.

3.6.2. Terms Related to Construction Safety and Hazard Recognition. There has been a significant paradigm shift in the area of CHR-HCI research over the last 21 years, with the emphasis moving from accident investigation to hazard prediction and prevention [23]. Forecasting is the key word in Figure 3 that illustrates this change [3]. As opposed to looking at accidents after they have already happened, the focus of accident prevention and hazard prediction is on making sure workers in the construction industry are aware of and prepared for any prospective dangers [3]. Because of this shift in philosophy, terms such as "risk perception" and "risk analysis" have emerged as vital tools for helping construction workers see potential dangers in high-stakes settings [24].

Earthquakes, a significant natural hazard, have also garnered scientists' ongoing interest in the study of risk prediction and hazard awareness. Researchers have begun promising new inquiries from the vantage points of earthquake design, urban planning, and cutting-edge materials [25]. The evolution of this field is reflected in the vocabulary of the field itself: terms such as "earthquakes," "seismic design," "seismology," "architectural design," and "reinforced concrete" are all part of the study of earthquakes and their effects [26].

Alterations in management structure in this area are reflected in the keyword co-occurrence network. In order to make accident prevention and hazard prediction a reality, revolutionary changes in organisational management and safety technology are essential [27]. Due to the inextricable link between management and construction safety, experts are always looking for new ways to enhance the industry's already stellar safety record. The evolution of this field is reflected in the rise of new concepts such as decision-making, monitoring, safety training, and risk management. After 21 years of study, scholars such as Yeo et al. [28] consider risk management, risk decision-making, engineering structural health, and safety training in engineering construction to be significant areas of inquiry.

3.7. Cluster Analysis. Cluster analysis was used to describe the most important developments in the field of CHR-HCI [3]. Using optimum computational techniques in statistics, cluster analysis is a method that may be used to analyse text data and uncover interesting study subjects. In this investigation, VOSviewer and CiteSpace were used for cluster analysis, with CiteSpace being used to fine-tune the data obtained by VOSviewer. Log-likelihood ratio, mutual information, and greatest word frequency are the three most commonly used approaches to naming modules in CiteSpace [29, 30]. Because the names of the modules are so descriptive, we settled on the highest word frequency technique to determine which modules existed.

Figure 4 [30] shows the results of the study and optimization, which led to the creation of four modules with no clear link between them: computer vision, ergonomics, computer simulation, and virtual reality [3].

Figure 4: Cluster analysis [30].

3.7.1. Cluster 1: Computer Vision. Out of a total of 251 articles found, 177 were directly relevant to this keyword [3]. This highlights the important role that computer vision plays in hazard identification investigations. Constant refinement of deep learning techniques such as convolutional neural networks, stacked autoencoder network models, and deep belief networks underpins recent developments in computer vision technology. Topics such as content-based picture extraction, posture assessment, multimodal data identification, autosomal motion, image tracking, scene reconstruction, image recovery, and system integration are crucial areas of study. There are two main lines of inquiry in computer vision related to danger recognition [3]. As an example, Luo et al. [31] have developed models and analysed cognitive connections.
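As a hedged illustration of the kind of convolutional model this cluster revolves around, the PyTorch sketch below defines a small classifier that labels site images as hazard or no-hazard. The architecture, input size, and class count are assumptions made for illustration and do not reproduce any model from the reviewed studies.

```python
import torch
import torch.nn as nn

class HazardCNN(nn.Module):
    """Small convolutional classifier for 64x64 RGB site images (illustrative only)."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = HazardCNN()
    logits = model(torch.randn(4, 3, 64, 64))  # a batch of 4 dummy images
    print(logits.shape)                        # torch.Size([4, 2])
```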
3.7.2. Cluster 2: Ergonomics. Since 2015, CHR-HCI has been strongly tied to ergonomics, which has progressed toward more diversity, humanization, and intelligence [3]. In order to enhance the efficiency of danger identification, scientists are now using physiological and psychometric methods to investigate the rational coordination link between the structural-functional, psychological, and mechanical components of the human body and computers [32]. Sixty-five of the 251 papers retrieved were associated with this keyword, demonstrating that the relationship between construction hazard identification and ergonomics is sufficient and that a large number of researchers have carefully studied the technological methods [3]. Task assessment and quantification, brain-computer interfaces, and experimental paradigms in engineering psychology are now at the centre of this field's investigation [3].

3.7.3. Cluster 3: Computer Simulation. A computer simulation, often called an "emulation," is software designed to mimic the behaviour of a model of a system in order to learn more about that system [33]. Of the 251 articles found, 97 were directly connected to this search term [3]. With the goal of simulating hazards in construction scenarios through simulation software and external parameters, current hazard recognition research in computer simulation focuses on discrete simulation, analogous simulation, simulation based on probe elements, and simulation of stochastic processes or deterministic models [3]. Creating new code and improving upon preexisting systems are both vital parts of this line of study. Discrete event simulation languages such as GPSS, SIMSCRIPT, GASD, CSL, and SIMULA and continuous system simulation languages such as DARE, ACSL, CSS, and CSSL have been continuously optimised by a large number of researchers, laying a firm groundwork for human-computer interaction technology and fostering the growth of hazard recognition [34].
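To make the discrete-event idea concrete, here is a minimal event-queue sketch in Python that schedules hypothetical hazard-inspection events on a simulated clock. It stands in for the dedicated simulation languages named above rather than reproducing any of them, and the zones, rates, and shift length are invented for illustration.

```python
import heapq
import random

def simulate_inspections(n_zones: int = 3, horizon: float = 8.0, seed: int = 1) -> int:
    """Toy discrete-event loop: each zone raises hazard events at random times;
    an inspector clears them in time order until the simulated shift ends."""
    rng = random.Random(seed)
    events = [(rng.uniform(0, horizon), zone) for zone in range(n_zones)]
    heapq.heapify(events)
    cleared = 0
    while events:
        time, zone = heapq.heappop(events)
        if time > horizon:          # shift over: stop processing
            break
        cleared += 1
        print(f"t={time:4.2f} h: hazard in zone {zone} inspected")
        # Each zone produces a follow-up event after a random delay.
        heapq.heappush(events, (time + rng.expovariate(1.0), zone))
    return cleared

if __name__ == "__main__":
    print("events handled:", simulate_inspections())
```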
3.7.4. Cluster 4: Virtual Reality. The purpose of virtual reality (VR) technology is to allow people to experience a computer-generated environment with all their senses [35]. The 52 articles found while searching for this keyword among the 251 results show that the introduction of virtual reality into the area of hazard detection has great potential for future growth [3]. Scholars are trying to optimise dynamic environment modeling, real-time 3D graphics creation, stereo display and sensor technology, and system integration technology from the standpoint of technological development [3]. From an application standpoint, virtual reality technology is primarily developed for use in construction risk assessment and worker safety training. The expensive cost of manufacture and the unreliability of the user's visual experience are two of virtual reality's key technological drawbacks [36].

4. Case Studies and Analysis

Two case studies are offered here to highlight how HCI research may include human values throughout the process.

4.1. Case Study 1: Wearable Vibration-Based Computer [37]. Information technology is being put to good use in many facets of modern life. Machines have become more vital due to the difficulties people have in conveying and processing information. One of the primary goals of speech recognition systems is to permit more widespread usage of computer systems that aid people's work in a variety of professions by allowing them to communicate with one another through voice [37].

Humans rely mostly on verbal exchanges for communication [37]. Understanding and identifying the speaker, their gender, age, and emotional state are all possible [38]. Humans' ability to communicate verbally begins in their minds, where a combination of motivation and neuronal activity produces audible speech. Speech is received by the auditory system, which transforms it into neural signals that the brain can interpret [39].

The inability to localise the source of a sound is the primary challenge faced by those with hearing loss. The primary aim of this research [37] was to find a way to help the hearing-impaired identify the source of an incoming sound and move in that direction. The other goal was to make sure that people with hearing loss could still understand who was talking and how loud they were talking. A voice recognition application's primary function is to take in speech data and generate an approximate translation. To do so, the captured audio from the microphone must be converted from analogue to digital, after which the characteristics of the acoustic signal can be extracted and used to identify critical features.
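The paragraph above describes digitising the microphone signal and extracting its characteristics; the sketch below shows one common way to do that with NumPy, computing a frame's amplitude (RMS energy) and dominant frequency. It is a generic signal-processing illustration, not the exact feature set used in [37].

```python
import numpy as np

def frame_features(frame: np.ndarray, sample_rate: int) -> tuple:
    """Return (rms_amplitude, dominant_frequency_hz) for one digitised audio frame."""
    rms = float(np.sqrt(np.mean(frame ** 2)))
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(frame.size, d=1.0 / sample_rate)
    dominant = float(freqs[np.argmax(spectrum)])
    return rms, dominant

if __name__ == "__main__":
    sr = 16_000
    t = np.arange(sr) / sr
    test_tone = 0.5 * np.sin(2 * np.pi * 440 * t)   # synthetic 440 Hz tone
    print(frame_features(test_tone, sr))            # roughly (0.354, 440.0)
```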
Two characteristics of the sound wave itself are very important: specifically, amplitude and frequency [37]. The treble and bass qualities of a sound are determined by the frequency, while the intensity and energy of a sound are established by the amplitude. Analysis and classification of acoustic signals are useful for sound recognition systems. Real-time tests of the wearable device have also been conducted, and the results have been compared. The device, worn by the user as shown in Figure 5 [37], can detect the presence of a deaf person by sensing vibrations transmitted through the user's clothing in real time.

Figure 5: Testing a wearable device on the user in real time [37].

The primary goal of this research [37] was to determine whether individuals with hearing loss can detect sounds such as brake or horn noises coming from behind them. People who have trouble hearing may experience distress when they hear noises approaching from behind. Additionally, the ability to hear the sounds of brakes and horns is crucial and permits people with hearing problems to travel safely. The goal is to develop a product that people with hearing difficulties can use on a daily basis to improve their lives. This will give them instantaneous, real-time access to additional perception and decision-making skills.

Ketabdar and Polzehl's research [40] included creating a smartphone app that would analyse sound, detect vibrations, and display alerts in the event of a loud event. This programme is helpful for the deaf and anyone with hearing impairments since it alerts them about nearby loud activities. The mobile phone's microphone is used by the spoken content analysis algorithm to collect data on the user's environment, which is then analysed for any shifts in the level of background noise. When changes occur or other circumstances arise, the app alerts the user with visual or vibratory-tactile cues that correspond to the altered speech content. The user will then know about the mishap [37]. With the study of user actions, this algorithm may be improved to do even more tasks [40]. As part of their research, Shivakumar and Rajasenathipathi used hardware control techniques and a screen input application to link people who are deaf or blind to a computer so that they may use modern computer technology for communication purposes, such as vibrating gloves [41].

The wearable solution underwent preliminary testing and deployment in the field. Incoming data were estimated in real time, and the user is updated instantly through vibrations. As the system reacts and reroutes the user, the wearable device predicts the direction once more. This method was used to determine which of the previously described methods was the most effective, and then that method was put into use. Subjects were played recordings of voices coming from a variety of locations and asked to identify their source. The success of the wearable system was evaluated by comparing these numbers to those obtained in the real world [37].

The second step involves hooking up the system to a computer and bringing the voices and their instructions into the digital realm. Each time the data were gathered from four separate microphones, they were stored in a matrix, and this process continued until a sizable data set had been amassed. Preprocessing, feature extraction, and classification were all successful with the data that were generated. Results were compared to the live application and discussed in context [37].

Four microphone ports were integrated into the final wearable system (Figure 6 [37]). To ensure clear audio in all cardinal directions, four microphones were used. Initial tests were conducted with only three microphones, but it was determined that four were required due to the system's low success rates and the fact that there are four main directions. With the help of the HCI, the microphones were positioned to the right, left, front, and back of the user (Figure 6 [37]).

Figure 6: The developed wearable system [37].
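As a simplified stand-in for the system's direction logic (the actual pipeline in [37] relies on preprocessing, feature extraction, and classification), the sketch below picks the loudest of the four microphone channels and maps it to a vibration or LED cue. The channel order and the cue mapping are assumptions for illustration.

```python
import numpy as np

CHANNELS = ("front", "right", "back", "left")          # assumed microphone order
CUES = {"front": "forward vibration pattern",
        "right": "right fingertip motor",
        "back": "three quick right-to-left pulses",
        "left": "left fingertip motor"}

def estimate_direction(frames: np.ndarray) -> str:
    """frames: array of shape (4, n_samples), one row per microphone.
    Returns the channel with the highest RMS energy."""
    energy = np.sqrt(np.mean(frames ** 2, axis=1))
    return CHANNELS[int(np.argmax(energy))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.normal(0, 0.01, size=(4, 1024))
    frames[3] += 0.2 * np.sin(np.linspace(0, 60, 1024))  # strong signal on the "left" mic
    d = estimate_direction(frames)
    print(d, "->", CUES[d])                              # left -> left fingertip motor
```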
Using four microphones as opposed to three improved accuracy in the experiments. The system's design called for two vibration motor outlet units, one on each fingertip, to indicate the direction of sound via vibration frequencies. The high concentration of nerves in the fingertips is the primary factor in this preference. Furthermore, vibration motors positioned on the fingers are more user-friendly and cause less disruption [37].

The designed system has four LED outlets, and when a sound is detected, the LED of the outlet facing the direction of the vibration is illuminated. The combination of vibration and LED lights enhances the user's ability to identify the correct direction. LEDs were used to provide a visible alert, and the possibility of using four distinct LED lights for the four cardinal directions is being studied. The LEDs give the user something to glance at if they are confused by the vibrations. In this investigation, vibration serves to stimulate the sensation of touch in those who are deaf or hard of hearing. Hearing-impaired people will have a better chance of comprehending and feeling at ease if they can communicate with others via touch [37].

A 32-bit MCU based on the ARM architecture and flash storage were included in the design [37]. It has a maximum frequency of 72 MHz, a 3.6 V application supply, seven timers, two ADCs, and nine communication interfaces. The wearable gadget that we created ran on rechargeable batteries, which allow about 10 hours of run time. Vibration allows people to detect the direction from which an incoming sound is coming 20 milliseconds after the vibration has been given; that is, the listener will be able to recognise the incoming sound within 20 ms [37].

A total of eight directions were used during five days of testing with four deaf people and two individuals with mild hearing loss, with findings compared to those of normal-hearing participants. The effectiveness was measured by playing recordings made from the left, right, front, and back and identifying the locations where these directions intersected. In this research, we analysed the data from the four- and eight-directional studies and conducted further tests in both controlled and natural settings [37].

Actual human subjects were employed as sound generators in these real-time studies [37]. An outdoor stroll would be interrupted by a call from behind, with the user's ability to hear the voice being measured. The computer system used a loudspeaker to play the audio. In this experiment, the participant's left and right fingertips were attached to vibration motors, and microphones were placed to their right, left, behind, and in front of them. For instance, the left fingertip's vibration motor would activate in response to a sound coming from the left. The right and left vibration motors would be used for forward and reverse movement, respectively; in the forward movement, three quick vibrations from the right-to-left motors would be produced, and the vibration motors would cycle through three rounds of vibration for the back, right, and left directions. The typical time taken for the user to discern the product's direction is 70 milliseconds. This research helped classify individuals based on how loud or quiet they sound, so those with hearing impairments could pay attention. When someone was making a loud noise nearby, for instance, those with hearing impairments might still comprehend what was going on and behave accordingly.

4.2. Case Study 2: Context-Aware Navigation System [42]. In mobile navigation contexts, context awareness is a fascinating issue due to the great degree of application-specific change. Not just during development but also in real time when the device is being used, navigation services take the user's current circumstances into account. A user's behaviour and the device's location are two examples of circumstances that might influence the services that a mobile navigation app offers. This article [42] addresses the problems of context-aware systems, which include acquiring context, interpreting context, and adapting applications to context. The work proposes a method for strengthening the precision and dependability of context-aware navigation systems via the use of inexpensive sensors in a multilevel fusion strategy. The experiments show that smartphones may be used for outdoor navigation with the help of context-aware personal navigation systems (PNS) [42].
Applications that are "context-aware" take external factors, such as the user's actions, into account when making judgments about the user and/or the environment. While many approaches have been explored for automatic context and environment recognition in context-aware applications (such as healthcare, sports, and social networking), there is still room for improvement. The study presented in [42], for example, is one of the first to apply user activity context to PNS and, more specifically, to vision-aided navigation [43]. For the purpose of recognising and using context in PNS applications, a new hybrid paradigm was introduced. When using a navigation app, the user's current activity (such as walking or driving) and the device's current location and orientation provide valuable context.

In the field of pervasive computation, Caetano suggested using a hybrid methodology to combine the best features of data-driven and knowledge-driven approaches [44]. Arato et al. proposed a knowledge-driven hybrid method for continuous and real-time activity recognition in smart homes via the use of multisensor data [45]. In this research, ontology-based semantic reasoning and classification are used for activity recognition, but domain knowledge is heavily leveraged throughout the entire process [42].

An activity recognition module is created to determine which sensors and features best aid in the development of a reliable context detection algorithm. With the help of the activity recognition module and a battery of experiments, it was possible to gauge how well it performed across a variety of user motions and modes [42]. The data collection for this study was performed using a Samsung Galaxy Note 1 smartphone.

The proposed context-aware model for navigation services uses a client-server architecture for its software. This architecture allows for the separation of application logic between the user's local Android device and a server-side resource with access to more extensive data stores and processing capabilities. Examples include sending the average value of a window of recorded accelerometer data from a local Android device to a web server for comparison against a database of context patterns. Wi-Fi allows for instantaneous data synchronisation with the server. An app is built to capture information from the mobile device and transmit it to the server [42]. This software creates timestamped data that can be used in real time. The main software and the user's data are stored on servers in a remote location, and the end users access the applications through a lightweight mobile application. Automatically or at the user's urging, all relevant sensor data for detection were preprocessed and sent to the server. The next step involves sending the results of the context detection and navigation solution back to the mobile user.

Two men and two women, ranging in age from 26 to 40, participated in the study to provide data on their physical activities [42]. Testing data were collected with the smartphone in a variety of positions, such as in a purse, in a jacket pocket, on a belt, held close to the ear while talking, and at the user's side while the arm was swung. The only restriction on how the smartphone should be worn is where on the body it should be kept. After two minutes, data from each activity with a unique device placement mode were saved to the server's database (DB). Subjects were asked to mark the beginning and end times of their primary activities in order to construct the reference data [42].
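The client-server split described above can be illustrated with a few lines of Python: summarise a short window of accelerometer samples on the device and POST it to a web service for matching against stored context patterns. The endpoint URL and payload fields are hypothetical placeholders, not part of the system in [42].

```python
import time
import numpy as np
import requests

SERVER_URL = "http://example.com/api/context"   # hypothetical endpoint, for illustration only

def window_summary(samples: np.ndarray) -> dict:
    """samples: shape (n, 3) accelerometer readings (x, y, z) for one window."""
    return {
        "timestamp": time.time(),
        "mean": samples.mean(axis=0).tolist(),
        "std": samples.std(axis=0).tolist(),
    }

def push_window(samples: np.ndarray) -> None:
    """In a deployed client, this would run whenever a window of data is full."""
    response = requests.post(SERVER_URL, json=window_summary(samples), timeout=5)
    response.raise_for_status()

if __name__ == "__main__":
    fake_window = np.random.normal([0.0, 0.0, 9.8], 0.2, size=(128, 3))
    print(window_summary(fake_window))   # push_window(fake_window) would upload it
```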
Those sensors that correlate most strongly with the activity classes are the most optimal for activity recognition. In order to detect motion, accelerometer sensors have become increasingly popular. The gyroscope can record the user's movements and the device's changing orientation. When trying to differentiate between groups of on-body device placements and to identify the device's orientation in each placement, orientation determination is a crucial feature [42]. In addition to assisting with orientation and heading determination, magnetometer sensors also provide absolute heading information. Device orientation can also be estimated using the orientation software sensor (or soft sensor) made available by the Android API. In this sensor, the orientation angles are generated by fusing three signals from an accelerometer, a gyroscope, and a magnetometer. The values of these angles characterise the relationship between the device's coordinate system and the regional navigational reference frame. The orientation soft-sensor's output can stand on its own as a sensor or be used to transform data from one coordinate system (the device's) to another (the reference navigation system). Multiple sensors' context recognition outputs have been analysed as a whole [42].

Calibration and noise reduction are applied to the raw data captured by the sensors, as depicted in Figure 7 [42]. Signal processing algorithms are then applied to the data in order to extract useful features. Although there is a vast pool of features from which to choose, only a few should be implemented for reliable, real-time context recognition [42]. Afterwards, the feature space can be classified using classification methods. Tervo et al. [46] noted that there is a wide range of feature extraction and classification methods and that the best method to use is often context-specific.

Figure 7: Calibration process [42].
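Following the pipeline just described (calibration, feature extraction, classification), the snippet below derives simple statistical features from labelled sensor windows and trains an off-the-shelf classifier. The feature set, classifier choice, and synthetic data are illustrative assumptions rather than the configuration evaluated in [42].

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(window: np.ndarray) -> np.ndarray:
    """window: (n_samples, 3) accelerometer window -> small feature vector."""
    return np.concatenate([
        window.mean(axis=0),                          # average acceleration per axis
        window.std(axis=0),                           # variability per axis
        [np.sqrt((window ** 2).sum(axis=1)).mean()],  # mean acceleration magnitude
    ])

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    # Synthetic training data: label 0 = "static", label 1 = "walking".
    windows = [rng.normal([0, 0, 9.8], 0.05 if label == 0 else 1.5, size=(128, 3))
               for label in (0, 1) for _ in range(50)]
    labels = [0] * 50 + [1] * 50
    X = np.array([extract_features(w) for w in windows])
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
    test = rng.normal([0, 0, 9.8], 1.5, size=(128, 3))   # a new "walking"-like window
    print("predicted activity:", clf.predict([extract_features(test)])[0])
```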
5. Conclusion

This paper proposes a framework that categorises the CHR-HCI field into three levels, acknowledging that human-computer interaction is an emerging interdisciplinary field encompassing numerous disciplines and that hazard recognition also requires complex theoretical knowledge and practical techniques. The paper reviewed several related works in the field of CHR-HCI and analyzed two related case studies.

From a research perspective, hazard identification is interested in the construction industry's practise of finding, perceiving, and recognising dangers and their influencing variables for the sake of risk assessment, accident prevention, foresight, prediction, and intelligent monitoring. The primary improvement in engineering safety driving philosophy during the last 21 years has been the shift from postaccident analysis to preaccident prediction and prevention, made possible by advancements in human-computer interface technology. As a result, this is one reason why we are pushing for the widespread use of HCI methods.

Theoretically speaking, there are two basic components to hazard recognition: theory pertaining to the hazards or risks involved and theory pertaining to the actual act of recognising or identifying the hazards. Theoretical guidance for the implementation of HCI technologies may be found in fields including risk psychology, ergonomics, human factors engineering, behavioural psychology, and sociology. Academics have paid a lot of attention to engineering ethics because of its supervisory role in scientific experiments, and this is because of the importance of engineering as science and technology progress. As such, engineering ethics should be taken into account as a fundamental compass for identifying potential dangers [47, 48].

In terms of real-world implementation, hazard recognition should find most use in computer simulation, computer vision, VR/AR, and robotics [49]. The issues we have highlighted are where we think researchers should focus in the future when studying hazard recognition: first, researchers want to find efficient ways to process multimodal data in hazard recognition experiments; second, they want to use these data to create intuitive devices for hazard recognition; and the end goal is to create a user-friendly platform for managing safety measures that uses multimodal data. Accordingly, these three areas of study have seen some practical application and also point in clear future directions.

Data Availability

The data that support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was partially funded by Middle East University.

References
[1] A. Albert, M. R. Hallowell, M. Skaggs, and B. Kleiner, "Empirical measurement and improvement of hazard recognition skill," Safety Science, vol. 93, pp. 1–8, 2017.
[2] B. J. Ladewski and A. J. Al-Bayati, "Quality and safety management practices: the theory of quality management approach," Journal of Safety Research, vol. 69, pp. 193–200.
[3] J. Wang, R. Cheng, M. Liu, and P. C. Liao, "Research trends of human-computer interaction studies in construction hazard recognition: a bibliometric review," Sensors, vol. 21, no. 18, p. 6172, 2021.
[4] N. Aniekwu, "Accidents and safety violations in the Nigerian construction industry," Journal of Science and Technology, vol. 27, pp. 81–89, 2007.
[5] A. Schulte, D. Donath, D. S. Lange, and R. S. Gutzwiller, "A heterarchical urgency-based design pattern for human automation interaction," in Proceedings of the 15th International Conference on Engineering Psychology and Cognitive Ergonomics, Las Vegas, NV, USA, July 2018.
[6] F. Tosi, Design for Ergonomics, Springer, Manhattan, NY, USA, 2020.
[7] Y. C. Hsu and I. Nourbakhsh, "When human-computer interaction meets community citizen science," Communications of the ACM, vol. 63, no. 2, pp. 31–34, 2020.
[8] D. A. Kocsis, "A conceptual foundation of design and implementation research in accounting information systems," International Journal of Accounting Information Systems, vol. 34, Article ID 100420, 2019.
[9] M. Jeon, R. Fiebrink, E. A. Edmonds, and D. Herath, "From rituals to magic: interactive art and HCI of the past, present, and future," International Journal of Human-Computer Studies, vol. 131, pp. 108–119, 2019.
[10] F. Gutiérrez, N. N. Htun, F. Schlenz, A. Kasimati, and K. Verbert, "A review of visualisations in agricultural decision support systems: an HCI perspective," Computers and Electronics in Agriculture, vol. 163, Article ID 104844, 2019.
[11] V. Righi, S. Sayago, and J. Blat, "When we talk about older people in HCI, who are we talking about? In the design of technologies for a growing and ageing population, there is a 'turn to community'," International Journal of Human-Computer Studies, vol. 108, pp. 15–31, 2017.
[12] B. Sumak, M. Spindler, M. Debeljak, M. Heričko, and M. Pušnik, "An empirical evaluation of a hands-free computer interaction for users with motor disabilities," Journal of Biomedical Informatics, vol. 96, Article ID 103249, 2019.
[13] X. Mao, K. Li, Z. Zhang, and J. Liang, "The design and implementation of a new smart home control system based on the internet of things," in Proceedings of the 2017 International Smart Cities Conference (ISC2), pp. 1–5, IEEE, Wuxi, China, September 2017.
[14] S. S. Rautaray and A. Agrawal, "Vision-based hand gesture recognition for human-computer interaction: a survey," Artificial Intelligence Review, vol. 43, no. 1, pp. 1–54, 2015.
[15] A. Pimenta, D. Carneiro, J. Neves, and P. Novais, "A neural network to classify fatigue from human–computer interaction," Neurocomputing, vol. 172, pp. 413–426, 2016.
[16] A. Esposito, A. M. Esposito, and C. Vogel, "Needs and challenges in human computer interaction for processing social emotional information," Pattern Recognition Letters, vol. 66, pp. 41–51, 2015.
[17] K. Shilton, "Values and ethics in human-computer interaction," Foundations and Trends in Human–Computer Interaction, vol. 12, no. 2, pp. 107–171, 2018.
[18] A. W. Eide, J. B. Pickering, T. Yasseri et al., "Human-machine networks: toward a typology and profiling framework," in Human-Computer Interaction. Theory, Design, Development and Practice, pp. 11–22, Springer, Manhattan, NY, USA, 2016.
[19] G. Jacucci, A. Spagnolli, J. Freeman, and L. Gamberini, "Symbiotic interaction: a critical definition and comparison to other human-computer paradigms," in Symbiotic Interaction, Lecture Notes in Computer Science, vol. 8820, G. Jacucci, L. Gamberini, J. Freeman, and A. Spagnolli, Eds., Springer Cham, Manhattan, NY, USA, 2015.
[20] C. M. Gray, "It's more of a mindset than a method: UX practitioners' conception of design methods," in Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 4044–4055, San Jose, CA, USA, May 2016.
[21] Y. Ikeda, "The automated building construction system for high-rise steel structure buildings," in Proceedings of the Council on Tall Buildings and Urban Habitat (CTBUH), Seoul, Korea, October 2004.
[22] T. Wakisaka, N. Furuya, Y. Inoue, and T. Shiokawa, "Automated construction system for high-rise reinforced concrete buildings," Automation in Construction, vol. 9, no. 3, pp. 229–250, 2000.
[23] D. Borys, "The role of safe work method statements in the Australian construction industry," Safety Science, vol. 50, no. 2, pp. 210–220, 2012.
[24] M. Zhang, T. Cao, and X. Zhao, "Applying sensor-based technology to improve construction safety management," Sensors, vol. 17, no. 8, p. 1841, 2017.
[25] S. Farahani, A. Tahershamsi, and B. Behnam, "Earthquake and post-earthquake vulnerability assessment of urban gas pipelines network," Natural Hazards, vol. 101, no. 2, pp. 327–347, 2020.
[26] M. D. Joyner and M. Sasani, "Building performance for earthquake resilience," Engineering Structures, vol. 210, Article ID 110371, 2020.
[27] G. Fu, X. C. Xie, Q. S. Jia, W. Q. Tong, and Y. Ge, "Accidents analysis and prevention of coal and gas outburst: understanding human errors in accidents," Process Safety and Environmental Protection, vol. 134, pp. 1–23, 2020.
[28] C. J. Yeo, J. H. Yu, and Y. Kang, "Quantifying the effectiveness of IoT technologies for accident prevention," Journal of Management in Engineering, vol. 36, no. 5, Article ID 4020054, 2020.
[29] L. Waltman, N. J. van Eck, and E. C. M. Noyons, "A unified approach to mapping and clustering of bibliometric networks," Journal of Informetrics, vol. 4, pp. 629–635, 2010.
[30] T. Ganbat, H. Y. Chong, P. C. Liao, Y. D. Wu, and X. B. Zhao, "A bibliometric review on risk management and building information modeling for international construction," Advances in Civil Engineering, vol. 2018, Article ID 8351679, 13 pages, 2018.
[31] H. Luo, M. Wang, P. K. Y. Wong, J. Tang, and J. C. P. Cheng, "Construction machine pose prediction considering historical motions and activity attributes using gated recurrent unit (GRU)," Automation in Construction, vol. 121, Article ID 103444, 2021.
[32] H. Zhang, X. Yan, and H. Li, "Ergonomic posture recognition using 3D view-invariant features from single ordinary camera," Automation in Construction, vol. 94, pp. 1–10, 2018.
[33] K. Kim, J. Chen, and Y. K. Cho, "Evaluation of machine learning algorithms for worker's motion recognition using motion sensors," in Computing in Civil Engineering 2019: Data, Sensing, and Analytics, pp. 51–58, American Society of Civil Engineers, Reston, VA, USA, 2019.
[34] A. Asadzadeh, M. Arashpour, H. Li, T. Ngo, A. Bab-Hadiashar, and A. Rashidi, "Sensor-based safety management," Automation in Construction, vol. 113, Article ID 103128, 2020.
[35] Z. Zhou, Y. M. Goh, and Q. Li, "Overview and analysis of safety management studies in the construction industry," Safety Science, vol. 72, pp. 337–350, 2015.
[36] Z. Hu, J. Zhang, and X. Zhang, "Construction collision detection for site entities based on 4-D space-time model," Journal of Tsinghua University Science and Technology, vol. 50, no. 6, pp. 820–825, 2010.
[37] M. Yağanoğlu and C. Köse, "Wearable vibration based computer interaction and communication system for deaf," Applied Sciences, vol. 7, no. 12, p. 1296, 2017.
[38] A. Milton and S. Tamil Selvi, "Class-specific multiple classifiers scheme to recognize emotions from speech signals," Computer Speech & Language, vol. 28, no. 3, pp. 727–742.
[39] B. Schuller, S. Steidl, A. Batliner et al., "Paralinguistics in speech and language—state-of-the-art and the challenge," Computer Speech & Language, vol. 27, no. 1, pp. 4–39, 2013.
[40] H. Ketabdar and T. Polzehl, "Tactile and visual alerts for deaf people by mobile phones," in Proceedings of the 11th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 25–28, ACM, Pittsburgh, PA, USA, October 2009.
[41] B. L. Shivakumar and M. Rajasenathipathi, "A new approach for hardware control procedure used in braille glove vibration system for disabled persons," Research Journal of Applied Sciences, Engineering and Technology, vol. 7, no. 9, pp. 1863–1871, 2014.
[42] S. Saeedi, A. Moussa, and N. El-Sheimy, "Context-aware personal navigation using embedded sensor fusion in smartphones," Sensors, vol. 14, no. 4, pp. 5742–5767, 2014.
[43] U. Gollner, T. Bieling, and G. Joost, "Mobile Lorm Glove: introducing a communication device for deaf-blind people," in Proceedings of the 6th International Conference on Tangible, Embedded, and Embodied Interaction, pp. 127–130, ACM, Kingston, ON, Canada, February 2012.
[44] G. Caetano and V. Jousmäki, "Evidence of vibrotactile input to human auditory cortex," NeuroImage, vol. 29, no. 1, pp. 15–28, 2006.
[45] A. Arato, N. Markus, and Z. Juhasz, "Teaching Morse language to a deaf-blind person for reading and writing SMS on an ordinary vibrating smartphone," in Computers Helping People with Special Needs, vol. 14, pp. 393–396, Springer International Publishing, Manhattan, NY, USA, 2014.
[46] S. Tervo, J. Pätynen, N. Kaplanis et al., "Spatial analysis and synthesis of car audio system and car cabin acoustics with a compact microphone array," Journal of the Audio Engineering Society, vol. 63, no. 11, pp. 914–925.
[47] H. Li, G. Chan, J. K. W. Wong, and M. Skitmore, "Real-time locating systems applications in construction," Automation in Construction, vol. 63, pp. 37–47, 2016.
[48] A. Montaser and O. Moselhi, "RFID indoor location identification for construction projects," Automation in Construction, vol. 39, pp. 167–179, 2014.
[49] R. Maalek and F. Sadeghpour, "Accuracy assessment of Ultra-Wide Band technology in tracking static resources in indoor construction scenarios," Automation in Construction, vol. 30, pp. 170–183, 2013.

Journal

Advances in Human-Computer Interaction, Hindawi Publishing Corporation

Published: Feb 14, 2023
