The accelerated move towards the adoption of the industrial Internet of Things (IIoT) paradigm has resulted in numerous shortcomings as far as security is concerned. One of the critical security threats affecting the IIoT is the so-called "False Data Injection" (FDI) attack. FDI attacks aim to mislead industrial platforms by falsifying their sensor measurements, and they have successfully evaded classical threat detection approaches. In this study, we present a novel method of FDI attack detection using Autoencoders (AEs). We exploit the correlation of sensor data in time and space, which in turn helps identify falsified data. Moreover, the falsified data are cleaned using denoising AEs. Performance evaluation shows that our technique succeeds in detecting FDI attacks and significantly outperforms a support vector machine (SVM)-based approach used for the same purpose. The denoising AE data cleaning algorithm is also shown to be very effective in recovering clean data from corrupted (attacked) data.
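The reconstruction-error idea behind an AE-based detector can be illustrated with a minimal sketch. The following Python example is not the paper's model: it uses synthetic correlated sensor data and an undercomplete scikit-learn MLP as a stand-in autoencoder, flagging samples whose reconstruction error exceeds a threshold learned on clean data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic correlated sensor data: 3 latent processes observed by 8 sensors.
latent = rng.normal(size=(2000, 3))
mix = rng.normal(size=(3, 8))
X = latent @ mix + 0.05 * rng.normal(size=(2000, 8))

scaler = StandardScaler().fit(X)
Xs = scaler.transform(X)

# Undercomplete MLP trained to reproduce its input through a narrow hidden
# layer, so it must learn the spatial correlations across sensors.
ae = MLPRegressor(hidden_layer_sizes=(3,), activation="tanh",
                  max_iter=2000, random_state=0)
ae.fit(Xs, Xs)

# Detection threshold taken from the clean training data.
train_err = np.mean((ae.predict(Xs) - Xs) ** 2, axis=1)
threshold = np.quantile(train_err, 0.99)

# New samples; the second half carries injected false data on one sensor.
latent_new = rng.normal(size=(200, 3))
X_new = latent_new @ mix + 0.05 * rng.normal(size=(200, 8))
X_new[100:, 2] += 3.0
Z = scaler.transform(X_new)
err = np.mean((ae.predict(Z) - Z) ** 2, axis=1)
print("detection rate:", np.mean(err[100:] > threshold))
print("false alarms  :", np.mean(err[:100] > threshold))
```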
One of the challenges for the successful use of wireless sensor networks in process industries is to design networks with energy-efficient transmission, to increase the lifetime of the deployed network while maintaining the required latency and bit-error rate. The design of such transmission schemes depends on the radio channel characteristics of the region. This paper presents an investigation of the statistical properties of the radio channel in a typical process industry, particularly when the network is meant to be deployed for a long time duration, e.g., days, weeks, and even months. Using 17–20-h-long extensive measurement campaigns in a rolling mill and a paper mill, we highlight the non-stationarity of the environment and quantify the ability of various distributions from the literature to describe the variations on the links. Finally, we analyze the design of an optimal received signal-to-noise ratio (SNR) for the deployed nodes and show that improper selection of the distribution for modeling the channel variations can lead to an overuse of energy by a factor of four or even higher.
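As a toy illustration of why the choice of fading distribution matters for the energy budget, the sketch below (with synthetic, not measured, samples) fits both a Rician and a Rayleigh model to the same link data and compares the fade margins each model would dictate for a 1% outage target; the ratio of the margins translates directly into an energy over-provisioning factor. All numbers are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic link amplitude samples with a strong line-of-sight component
# (Rician-like), standing in for a long measurement campaign.
los, sigma = 2.0, 0.5
amp = np.abs(los + sigma * (rng.normal(size=20000) + 1j * rng.normal(size=20000)))

# Fit two candidate small-scale fading models to the same data.
rice_params = stats.rice.fit(amp, floc=0)
rayl_params = stats.rayleigh.fit(amp, floc=0)

# Fade margin (dB) needed to keep the outage probability at 1% under each model:
# distance between the median level and the 1st percentile of the fitted CDF.
def fade_margin_db(dist, params, outage=0.01):
    return 20 * np.log10(dist.ppf(0.5, *params) / dist.ppf(outage, *params))

m_rice = fade_margin_db(stats.rice, rice_params)
m_rayl = fade_margin_db(stats.rayleigh, rayl_params)

# Choosing the wrong model inflates the transmit-power budget by this factor.
print(f"Rician margin  : {m_rice:5.2f} dB")
print(f"Rayleigh margin: {m_rayl:5.2f} dB")
print(f"energy over-provisioning factor: {10 ** ((m_rayl - m_rice) / 10):.1f}x")
```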
Wireless use cases in industrial Internet-of-Things (IIoT) networks often require guaranteed data rates ranging from a few kilobits per second to a few gigabits per second. Supporting such a requirement with a single radio access technique is difficult, especially when bandwidth is limited. Although non-orthogonal multiple access (NOMA) can improve the system capacity by simultaneously serving multiple devices, its performance suffers from strong inter-user interference. In this paper, we propose a Q-learning-based algorithm for handling many-to-many matching problems such as bandwidth partitioning, device assignment to sub-bands, interference-aware access mode selection (orthogonal multiple access (OMA) or NOMA), and power allocation to each device. The learning technique maximizes system throughput and spectral efficiency (SE) while maintaining quality-of-service (QoS) for a maximum number of devices. The simulation results show that the proposed technique can significantly increase overall system throughput and SE while meeting heterogeneous QoS criteria.
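To make the Q-learning formulation concrete, here is a heavily simplified, self-contained sketch (not the paper's algorithm): devices are scheduled one by one onto sub-bands that can hold either one OMA device or a NOMA pair, the state is the device index plus the current band occupancy, and the reward is a crude rate proxy with a penalty for overloading a band. All sizes, gains, and the reward model are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(2)
K, B = 4, 2                                   # toy problem: 4 devices, 2 sub-bands
gain = rng.uniform(0.2, 1.0, size=(K, B))     # synthetic per-device channel gains

def step_reward(k, b, occ):
    """Rate proxy for device k on band b given the occupancy so far: a second
    occupant turns the band into a NOMA pair with mutual interference,
    a third occupant is disallowed (QoS penalty)."""
    if occ[b] >= 2:
        return -1.0
    snr = 10 * gain[k, b]
    return np.log2(1 + snr) if occ[b] == 0 else np.log2(1 + snr / (1 + snr))

Q = defaultdict(lambda: np.zeros(B))
alpha, gamma, eps = 0.2, 0.9, 0.2
for episode in range(20000):
    occ = (0,) * B
    for k in range(K):                        # schedule devices sequentially
        state = (k, occ)
        b = int(rng.integers(B)) if rng.random() < eps else int(Q[state].argmax())
        r = step_reward(k, b, occ)
        new_occ = tuple(o + (i == b) for i, o in enumerate(occ))
        nxt = (k + 1, new_occ)
        target = r + gamma * (Q[nxt].max() if k + 1 < K else 0.0)
        Q[state][b] += alpha * (target - Q[state][b])
        occ = new_occ

# Greedy rollout of the learned policy: sub-band and OMA/NOMA mode per device.
occ, plan = (0,) * B, []
for k in range(K):
    b = int(Q[(k, occ)].argmax())
    plan.append((k, b, "NOMA" if occ[b] >= 1 else "OMA"))
    occ = tuple(o + (i == b) for i, o in enumerate(occ))
print(plan)
```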
In this work, we consider the design of a radio resource management (RRM) solution for the traffic steering (TS) use case in the open radio access network (O-RAN). O-RAN TS deals with the quality-of-service (QoS)-aware steering of traffic through connectivity management (e.g., device-to-cell association, radio spectrum, and power allocation) for emerging heterogeneous networks (HetNets) in 5G-and-beyond systems. However, TS in HetNets is a complex problem in terms of efficiently assigning and utilizing the radio resources while satisfying the diverse QoS requirements, especially of the cell-edge users with their poor signal-to-interference-plus-noise ratio (SINR). In this respect, we propose an intelligent non-orthogonal multiple access (NOMA)-based RRM technique for a small cell base station (SBS) within a macro gNB. A Q-learning-assisted algorithm is designed to allocate the transmit power and frequency sub-bands at the O-RAN control layer such that interference from the macro gNB to SBS devices is minimized while ensuring the QoS of the maximum number of devices. The numerical results show that the proposed method enhances the overall spectral efficiency of the NOMA-based TS use case without adding to the system's complexity or cost compared to traditional HetNet topologies such as co-channel deployments and dedicated channel deployments.
This paper proposes a novel partial non-orthogonal multiple access (P-NOMA)-based semi-integrated sensing and communication (ISaC) system design. As an example ISaC scenario, we consider a vehicle simultaneously receiving the communication signal from infrastructure-to-vehicle (I2V) and sensing signal from vehicle-to-vehicle (V2V). P-NOMA allows exploiting both the orthogonal multiple access (OMA) and NOMA schemes for interference reduction and spectral efficiency (SE) enhancement while providing the flexibility of controlling the overlap of the sensing and communication signals according to the channel conditions and priority of the sensing and communication tasks. In this respect, we derive the closed-form expressions for communication outage probability and sensing probability of detection in Nakagami-m fading by considering the interference from the composite sensing channel. Our extensive analysis allows capturing the performance trade-offs of the communication and the sensing tasks with respect to various system parameters such as overlapping partial NOMA parameter, target range, radar cross section (RCS), and parameter m of the Nakagami-m fading channel. Our results show that the proposed P-NOMA-based semi-ISaC system outperforms the benchmark OMA- and NOMA-based systems in terms of communication spectral efficiency and probability of detection for the sensing target.
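For readers who want to sanity-check such closed-form results numerically, the following Monte Carlo sketch estimates the communication outage probability of a comparable setup under Nakagami-m fading, with the overlapped sensing signal treated as interference; all powers, m parameters, and the overlap factor are arbitrary illustrative values, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 1_000_000
m_c, m_s = 3.0, 2.0          # Nakagami-m parameters of the communication / sensing paths
P_c, P_s, noise = 1.0, 0.5, 0.05
alpha = 0.3                  # fraction of the communication band overlapped by sensing
R_th = 1.0                   # target rate in bit/s/Hz

# Nakagami-m amplitude corresponds to a Gamma(shape=m, scale=Omega/m) power gain.
g_c = rng.gamma(shape=m_c, scale=1.0 / m_c, size=N)
g_s = rng.gamma(shape=m_s, scale=1.0 / m_s, size=N)

# Only the overlapped portion of the sensing signal interferes with communication.
sinr = P_c * g_c / (alpha * P_s * g_s + noise)
rate = np.log2(1 + sinr)
print("outage probability:", np.mean(rate < R_th))
```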
The fifth generation (5G) of cellular networks provides the enabling environment for Internet of Things (IoT) applications. The vast proliferation of 5G-enabled IoT devices and services has led to an overwhelming growth of data traffic that could saturate the core network's backhaul links. Caching has therefore become an unavoidable technique to address this issue, whereby popular contents are stored on edge nodes close to end-users. Several initiatives exist to motivate caching actors to improve the caching process, but they are not designed for the real-world competitive caching market. In this work, we propose an incentive caching strategy in a 5G-enabled IoT network by considering a completely competitive caching scenario with multiple 5G mobile network operators (MNOs) and multiple content providers (CPs). The MNOs manage a set of edge caches on their base stations and compete to fill these caching resources, while the CPs hold a set of popular contents and compete to rent the MNOs' caches. Each MNO aims to maximize its monetary profit and offload its backhaul links, while each CP aims to improve the quality of experience (QoE) of its end-users. We formulate a multi-leader multi-follower Stackelberg game to model the interaction between MNOs and CPs and define the different players' utilities. Subsequently, we propose an iterative algorithm based on convex optimization to investigate the Stackelberg equilibrium. Finally, the numerical results of the different experiments demonstrate that our game-based incentive strategy can significantly offload the backhaul links while improving the user QoE.
The development of the fifth generation (5G) of cellular systems enables the realization of densely connected, seamlessly integrated, and heterogeneous device networks. While 5G systems were developed to support the Internet of Everything (IoE) paradigm of communication, their mass-scale implementations have excessive capital deployment costs and severely detrimental environmental impacts. Hence, these systems are not feasibly scalable for the envisioned real-time, high-rate, high-reliability, and low-latency requirements of connected consumer, commercial, industrial, healthcare, and environmental processes of the IoE network. The IoE vision is expected to support 30 billion devices by 2030, hence, green communication architectures are critical for the development of next-generation wireless systems. In this context, intelligent reflecting surfaces (IRS) have emerged as a promising disruptive technological advancement that can adjust wireless environments in an energy-efficient manner. This work utilizes and analyzes a multi-node distributed IRS-assisted system in variable channel conditions and resource availability. We then employ machine learning and optimization algorithms for efficient resource allocation and system design of a distributed IRS-enabled industrial Internet of Things (IoT) network. The results show that the proposed data-driven solution is a promising optimization architecture for high-rate, next-generation IoE applications.
Localization has gained great attention in recent years, and different technologies have been utilized to achieve high positioning accuracy. Fingerprinting is a common technique for indoor positioning using short-range radio frequency (RF) technologies such as Bluetooth Low Energy (BLE). In this paper, we investigate the suitability of LoRa (Long Range) technology for implementing a positioning system based on received signal strength indicator (RSSI) fingerprinting. We test in real line-of-sight (LOS) and non-LOS (NLOS) environments to determine appropriate LoRa packet specifications for an accurate RSSI-to-distance mapping function. To further improve the positioning accuracy, we consider the environmental context. Extensive experiments are conducted to examine the performance of LoRa at different spreading factors, and we analyze the path loss exponent and the standard deviation of shadowing in each environment.
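The RSSI-to-distance mapping referred to above is typically obtained from the log-distance path loss model. The sketch below, using synthetic rather than measured samples, shows the standard least-squares estimation of the path loss exponent and shadowing standard deviation, and the inversion of the fitted model into a distance estimate.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic calibration data: RSSI vs. distance following a log-distance model
# with log-normal shadowing (stand-in for measured LoRa RSSI samples).
d = rng.uniform(5, 300, size=400)                    # metres
n_true, sigma_true, rssi_d0 = 2.7, 4.0, -40.0        # ground truth of the synthetic data
rssi = rssi_d0 - 10 * n_true * np.log10(d) + rng.normal(0, sigma_true, size=d.size)

# Least-squares fit of RSSI = A - 10*n*log10(d): the slope gives the path-loss exponent.
X = np.column_stack([np.ones_like(d), -10 * np.log10(d)])
(A_hat, n_hat), *_ = np.linalg.lstsq(X, rssi, rcond=None)
sigma_hat = np.std(rssi - X @ np.array([A_hat, n_hat]))

print(f"path-loss exponent n = {n_hat:.2f}, shadowing sigma = {sigma_hat:.2f} dB")

# RSSI-to-distance mapping used for ranging: invert the fitted model.
def rssi_to_distance(r):
    return 10 ** ((A_hat - r) / (10 * n_hat))
print("estimated distance at -90 dBm:", round(rssi_to_distance(-90.0), 1), "m")
```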
The exponential growth in global mobile data traffic, especially with regard to the massive deployment of devices envisioned for the fifth generation (5G) mobile networks, has given impetus to exploring new spectrum opportunities to support the new traffic demands. The millimeter wave (mmWave) frequency band is considered a potential candidate for alleviating the spectrum scarcity. Moreover, the concept of multi-tier networks has gained popularity, especially for dense network environments. In this article, we deviate from the conventional multi-tier networks and employ the concept of control-data separation architecture (CDSA), which comprises a control base station (CBS) overlaying the data base station (DBS). We assume that the CBS operates on the sub-6 GHz single band, while the DBS possesses a dual-band mmWave capability, i.e., 26 GHz unlicensed band and 60 GHz licensed band. We formulate a multi-objective optimization (MOO) problem, which jointly optimizes two conflicting objectives: the spectral efficiency (SE) and the energy efficiency (EE). A unique aspect of this work is the analysis of a joint radio resource allocation algorithm based on Lagrangian Dual Decomposition (LDD); we compare the proposed algorithm with the maximal-rate (maxRx), dynamic sub-carrier allocation (DSA), and joint power and rate adaptation (JPRA) algorithms to show the performance gains it achieves.
With the growing popularity of Internet-of-Things (IoT)-based smart city applications, various long-range and low-power wireless connectivity solutions are under rigorous research. LoRa is one such solution that works in the sub-GHz unlicensed spectrum and promises to provide long-range communication with minimal energy consumption. However, conventional LoRa networks are single-hop, with the end devices connected to a central gateway through a direct link, which may be subject to large path loss and hence render low connectivity and coverage. This article motivates the use of multi-hop LoRa topologies to enable energy-efficient connectivity in smart city applications. We present a case study that experimentally evaluates and compares single-hop and multi-hop LoRa topologies in terms of range extension and energy efficiency by evaluating the packet reception ratio (PRR) for various source-to-destination distances, spreading factors (SFs), and transmission powers. The results highlight that a multi-hop LoRa network configuration can save significant energy and enhance coverage. For instance, it is shown that to achieve a 90% PRR, a two-hop network provides 50% energy savings compared to a single-hop network while extending coverage by 35% at a particular SF. Finally, we discuss open challenges in multi-hop LoRa deployment and optimization.
Although the Internet-of-Things (IoT) is revolutionizing the IT sector, it is not yet mature, as several technologies are still competing to serve as the backbone of this system. The IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL) is one of the promising candidate technologies for adoption in the IoT and Industrial IoT (IIoT). Attacks against RPL have been shown to be possible, as attackers exploit the unauthorized parent selection mechanism of the RPL protocol. In this work, we propose a methodology and architecture to detect intrusions against the IIoT. In particular, we target the detection of attacks against RPL by using genetic programming. Our results indicate that the developed framework can successfully detect routing attacks in RPL-based Industrial IoT networks, with high accuracy, a high true positive rate, and a low false positive rate.
Wireless sensor network communication in industrial environments is compromised by interference, multipath fading, and signal attenuation. In that respect, accurate channel diagnostics is imperative for selecting adequate countermeasures. This paper presents the lightweight packet error discriminator (LPED), which infers the wireless link condition by distinguishing between errors caused by multipath fading and attenuation and those inflicted by interfering wideband single-channel communication systems (e.g., IEEE 802.11b/g), based on the differences in their error footprints. The LPED uses forward error correction in a novel context, namely, to determine the symbol error density, which is then fed to a discriminator for error source classification. The classification criteria are derived from an extensive set of error traces collected in three different types of industrial environments and verified on a newly collected set of error traces. The proposed solution is evaluated both offline and online, in terms of classification accuracy, speed of channel diagnostics, and execution time. The results show that in ≥91% of cases, a single packet is sufficient for a correct channel diagnosis, accelerating link state inference by at least 270% compared with the relevant state-of-the-art approaches. The execution time of LPED, for the worst case of packet corruption and maximum packet size, is below 30 ms with ≤3% of device memory consumption. Finally, live tests in an industrial environment show that LPED quickly recovers from link outage, losing up to two packets on average, which is only one packet above the theoretical minimum.
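The core of the discriminator can be summarized in a few lines: once the FEC decoder reveals which symbols were corrupted, the density of errors over the corrupted span is compared against a threshold. The snippet below is only a schematic of that idea with made-up numbers, not the classification criteria derived from the measurement campaigns.

```python
def symbol_error_density(error_positions):
    """Errored symbols per symbol over the span they occupy (FEC decoder output)."""
    if not error_positions:
        return 0.0
    span = max(error_positions) - min(error_positions) + 1
    return len(error_positions) / span

def classify(error_positions, density_threshold=0.5):
    """Dense, clustered symbol errors hint at a wideband interferer burst (e.g., WLAN);
    sparse, scattered errors hint at multipath fading or attenuation."""
    rho = symbol_error_density(error_positions)
    return "interference" if rho >= density_threshold else "fading/attenuation"

# Toy examples for a 64-symbol packet: a contiguous burst vs. isolated errors.
print(classify([20, 21, 22, 23, 24, 25]))   # -> interference
print(classify([3, 30, 58]))                # -> fading/attenuation
```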
Forward Error Correction is a preemptive manner of improving communication reliability. Albeit not part of the IEEE 802.15.4-2006 standard, its application in Industrial Wireless Sensor Networks has been widely considered. Nevertheless, this study is the first performance analysis on real error traces with sufficiently lightweight channel codes, with respect to IEEE 802.15.4-2006 and industrial wireless communication timing constraints. Based on these constraints and the bit error properties of the collected traces, the use of the Reed-Solomon (15,7) block code is suggested, which can be implemented in software. Experiments show that the bit error nature on links affected by multipath fading and attenuation in industrial environments is such that RS(15,7) can correct ≥95% of erroneously received packets, without the necessity for interleaving. On links under IEEE 802.11 interference, typically up to 50% of corrupted packets can be recovered by combining RS(15,7) with symbol interleaving, which has proven to be more effective than its bit counterpart. The optimal interleaving depth is found empirically, and it is shown that a simple bit-interleaved 1/3 repetition code achieves at least 90% of the correcting performance of the RS(15,7) code on uninterfered links that operate ≥10 dB above the sensitivity threshold.
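The role of symbol interleaving mentioned above is easy to visualize: by spreading a contiguous burst across several codewords, it keeps the per-codeword error count within the correction capability t = 4 of RS(15,7). The sketch below demonstrates this with a depth-4 block interleaver over four codewords; it tracks symbol positions only and does not implement the actual RS encoder/decoder.

```python
import numpy as np

def interleave(symbols, depth):
    """Block symbol interleaver: write row-wise into a depth x n matrix, read column-wise."""
    n = int(np.ceil(len(symbols) / depth))
    pad = depth * n - len(symbols)
    mat = np.concatenate([symbols, np.zeros(pad, dtype=symbols.dtype)]).reshape(depth, n)
    return mat.T.reshape(-1), pad

def deinterleave(symbols, depth, pad):
    n = len(symbols) // depth
    out = symbols.reshape(n, depth).T.reshape(-1)
    return out[:len(out) - pad] if pad else out

# A burst of 6 corrupted 4-bit symbols hits the channel; after de-interleaving with
# depth 4, each RS(15,7) codeword sees at most 2 symbol errors, well within its
# correction capability of t = 4 symbols.
codewords = np.arange(60)                 # four 15-symbol codewords (indices only)
tx, pad = interleave(codewords, depth=4)
rx = tx.copy()
rx[10:16] = -1                            # contiguous burst on the channel
recovered = deinterleave(rx, depth=4, pad=pad)
errors_per_codeword = [(recovered[i*15:(i+1)*15] == -1).sum() for i in range(4)]
print("symbol errors per RS(15,7) codeword:", errors_per_codeword)
```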
Communication reliability is the ultimate priority in safety-critical wireless sensor network (WSN) communication. Surprisingly enough, the enormous potential of error control on direct sequence spread spectrum (DSSS) chips in IEEE 802.15.4 has been completely overlooked by the WSN community, possibly due to incorrect presumptions, such as concerns about computational overhead. Current error-correction schemes in WSN counteract the error process once the errors have already propagated to bit- and packet-level. Motivated by the notion that errors should be confronted at the earliest stage, this work presents CLAP, a novel method that tremendously improves error correction in WSN by fortifying the IEEE 802.15.4 Physical layer (PHY) with straightforward manipulations of DSSS chips. CLAP is implemented on a software-defined radio platform and evaluated on real error traces from heavily WLAN-interfered IEEE 802.15.4 transmissions in three different environments. CLAP boosts the number of corrected packets by 1.78-6.88 times on severely interfered links, compared to two other state-of-the-art schemes. The overhead in terms of computational complexity is about 10% of the execution time of the OQPSK demodulator in the legacy IEEE 802.15.4 receiver chain.
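The chip-level reasoning can be illustrated with a minimum-Hamming-distance despreader. Note that the chip table below is a random placeholder, not the symbol-to-chip mapping defined by IEEE 802.15.4, and the snippet only shows the despreading decision that chip-level error control builds on, not CLAP itself.

```python
import numpy as np

rng = np.random.default_rng(5)

# Placeholder chip table: 16 random 32-chip sequences standing in for the
# IEEE 802.15.4 O-QPSK symbol-to-chip mapping (the real sequences are defined
# by the standard; random ones keep this sketch self-contained).
CHIPS = rng.integers(0, 2, size=(16, 32))

def despread(chips_rx):
    """Minimum-Hamming-distance decision over all 16 candidate chip sequences.
    Chip errors are absorbed here as long as the received word stays closer to
    the transmitted sequence than to any other candidate."""
    dist = (CHIPS != chips_rx).sum(axis=1)
    return int(dist.argmin()), int(dist.min())

# Transmit symbol 9 and flip 6 of the 32 chips (heavy interference on the channel).
tx_symbol = 9
rx = CHIPS[tx_symbol].copy()
flip = rng.choice(32, size=6, replace=False)
rx[flip] ^= 1
decoded, dist = despread(rx)
print(f"decoded symbol {decoded} (chip errors seen: {dist}), correct: {decoded == tx_symbol}")
```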
Three major obstacles to wireless communication are electromagnetic interference, multipath fading and signal attenuation. The former stems mainly from collocated wireless systems operating in the same frequency band, while the latter two originate from physical properties of the environment. Identifying the source of packet corruption and loss is crucial, since the adequate countermeasures for different types of threats are essentially different. This problem is especially pronounced in industrial monitoring and control applications, where IEEE 802.15.4 communication is expected to deliver data within tight deadlines, with minimal packet loss. This work presents the Lightweight Packet Error Discriminator (LPED) that distinguishes between errors caused by multipath fading and attenuation, and those inflicted by IEEE 802.11 interference. LPED uses Forward Error Correction to determine the symbol error positions inside erroneously received packets and calculates the error density, which is then fed to a discriminator for error source classification. The statistical constituents of LPED are obtained from an extensive measurement campaign in two different types of industrial environments. The classifier incurs no overhead and in ≥90% of cases a single packet is sufficient for a correct channel diagnosis. Experiments show that LPED accelerates link diagnostics by at least 190%, compared to the relevant state-of-the-art approaches.
The requirements of safety-critical wireless sensor network (WSN) applications, such as closed-loop control and traffic safety, cannot be met by the IEEE 802.15.4-2006 standard nor its industrial WSN (IWSN) derivatives. The main problem in that respect is the communication reliability, which is seriously compromised by 2.4-GHz interference, as well as multipath fading and attenuation (MFA) at industrial facilities. Meanwhile, communication blackouts in critical WSN applications may lead to devastating consequences, including production halts and damage to production assets, and may pose a threat to the safety of human personnel. This work presents PREED, a method to improve the reliability by exploiting the determinism in IWSN communication. The proposed solution is based on the analysis of bit error traces collected in real transmissions at four different industrial environments. A case study on the WirelessHART packet format shows that PREED recovers 42%-134% more packets than the competing approaches on links compromised by WLAN interference. In addition, PREED reduces one of the most trivial causes of packet loss in IWSN, i.e., the corruption of the frame length byte, by 88% and 99%, for links exposed to WLAN interference and MFA, respectively.
The knowledge of error nature in wireless channels is an essential constituent of efficient communication protocol design. To this end, this paper is the first comprehensive bit- and symbol-level analysis of IEEE 802.15.4 transmission errors in industrial environments. The intention of this paper is to extract the error properties relevant for future improvements of wireless communication reliability and coexistence of radio systems in these harsh conditions. An extensive set of bit-error traces was collected in a variety of scenarios and industrial environments, showing that error behavior is highly dependent on the cause of packet corruption. It is shown that errors inflicted by multipath fading and attenuation exhibit different properties than those imposed by IEEE 802.11 interference. The statistical behavior of these two patterns is concurrently investigated in terms of differences in bit-error distribution, error burst length, channel memory length, and the scale of packet corruption. With these conclusions at hand, abiding by the computational constraints of embedded sensors and the statistical properties of bit-errors, a Reed-Solomon $(15,k)$ block code is chosen to investigate the implications of bit-error nature on practical aspects of channel coding and interleaving. This paper concludes with a number of findings of high practical relevance, concerning the optimal type, depth, and meaningfulness of interleaving.
The ease of acquiring hardware-based link quality indicators is an alluring property for fast channel estimation in time- and safety-critical Wireless Sensor Network (WSN) applications, such as closed-loop control and interlocking. The two rudimentary hardware-based channel quality metrics, Received Signal Strength (RSS) and Link Quality Indicator (LQI), are the key constituents of channel estimation and a plethora of other WSN functionalities, from routing to transmit power control. Nevertheless, this study highlights three deficient aspects of these two indicators: 1) overall deceptiveness, i.e. the inability to reveal the presence of interference, falsely indicating excellent channel conditions in an unacceptably high fraction of cases; 2) the burstiness of missed detections, which compromises the attempts to eliminate the deceptiveness by averaging; 3) high mutual discrepancy of the two indicators, observed in 39-73% of packets, throughout different scenarios. The ability of RSS and LQI to indicate IEEE 802.11 interference is scrutinized in a variety of scenarios in typical industrial environments, using commercial off-the-shelf hardware and realistic network topologies, giving the findings of this study a high general validity and practical relevance.
The applications of Industrial Wireless Sensor and Actuator Networks (IWSAN) are time-critical and subject to strict requirements in terms of end-to-end delay and reliability of data delivery. A notable shortcoming of the existing wireless industrial communication standards is the existence of overcomplicated routing protocols, whose adequacy for the intended applications is questionable [1]. This paper evaluates the potentials of flooding as a data dissemination technique in IWSANs. The concept of flooding is recycled by introducing minimal modifications to its generic form and compared with a number of existing WSN protocols, in a variety of scenarios. The simulation results of all scenarios observed show that our lightweight approach is able to meet stringent performance requirements for networks of considerable sizes. Furthermore, it is shown that this solution significantly outperforms a number of conventional WSN routing protocols in all categories of interest.
In this paper we address the issues of timeliness and transmission reliability of existing industrial communication standards. We combine a Forward Error Correction coding scheme on the Medium Access Control layer with a lightweight routing protocol to form an IEEE 802.15.4-conformable solution, which can be implemented into already existing hardware without violating the standard. After laying the theoretical foundations, we conduct a performance evaluation of the proposed solution. The results show a substantial gain in reliability and reduced latency, compared to the uncoded transmissions, as well as common Wireless Sensor Network routing protocols.
Reconfigurable intelligent surfaces (RISs), with the potential to realize smart radio environments, have emerged as an energy-efficient and a cost-effective technology to support the services and demands foreseen for coming decades. By leveraging a large number of low-cost passive reflecting elements, RISs introduce a phase-shift in the impinging signal to create a favorable propagation channel between the transmitter and the receiver. In this article, we provide a tutorial overview of RISs for sixth-generation (6G) wireless networks. Specifically, we present a comprehensive discussion on performance gains that can be achieved by integrating RISs with emerging communication technologies. We address the practical implementation of RIS-assisted networks and expose the crucial challenges, including the RIS reconfiguration, deployment and size optimization, and channel estimation. Furthermore, we explore the integration of RIS and non-orthogonal multiple access (NOMA) under imperfect channel state information (CSI). Our numerical results illustrate the importance of better channel estimation in RIS-assisted networks and indicate the various factors that impact the size of RIS. Finally, we present promising future research directions for realizing RIS-assisted networks in 6G communication.
In this paper, we investigate the reconfigurable intelligent surface (RIS)-assisted non-orthogonal multiple access-based backscatter communication (BAC-NOMA) system under Nakagami-m fading channels and element-splitting protocol. To evaluate the system performance, we first approximate the composite channel gain, i.e., the product of the forward and backscatter channel gains, as a Gamma random variable via the central limit theorem (CLT) and method of moments (MoM). Then, by leveraging the obtained results, we derive the closed-form expressions for the ergodic rates of the strong and weak backscatter nodes (BNs). To provide further insights, we conduct the asymptotic analysis in the high signal-to-noise ratio (SNR) regime. Our numerical results show an excellent correlation with the simulation results, validating our analysis, and demonstrate that the desired system performance can be achieved by adjusting the power reflection and element-splitting coefficients. Moreover, the results reveal the significant performance gain of the RIS-assisted BAC-NOMA system over the conventional BAC-NOMA system.
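The CLT/MoM step can be reproduced numerically in a few lines: simulate the composite RIS channel gain, match its first two moments to a Gamma distribution, and compare quantiles. The sketch below assumes perfectly aligned phases and unit-mean Nakagami-m hops, which are simplifying assumptions made for illustration rather than the paper's exact system model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
N_elem, m_f, m_b = 32, 2.0, 2.0       # RIS elements, Nakagami-m of forward / backscatter hops
samples = 100_000

# Composite gain: coherent sum over RIS elements of the product of the two hop
# amplitudes (phases assumed perfectly aligned), then squared to get a power gain.
h_f = rng.gamma(m_f, 1.0 / m_f, size=(samples, N_elem)) ** 0.5   # Nakagami amplitudes
h_b = rng.gamma(m_b, 1.0 / m_b, size=(samples, N_elem)) ** 0.5
g = (h_f * h_b).sum(axis=1) ** 2

# Method of moments: match a Gamma(k, theta) to the first two moments of g.
mean, var = g.mean(), g.var()
k, theta = mean**2 / var, var / mean

# Quick check of the approximation at a few quantiles.
for q in (0.05, 0.5, 0.95):
    print(f"q={q}: empirical {np.quantile(g, q):8.1f}   Gamma fit {stats.gamma.ppf(q, k, scale=theta):8.1f}")
```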
While targeting the energy-efficient connectivity of the Internet-of-things (IoT) devices in the sixth-generation (6G) networks, in this paper, we explore the integration of non-orthogonal multiple access-based backscatter communication (BAC-NOMA) and simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs). To this end, first, for the performance evaluation of the STAR-RIS-assisted BAC-NOMA system, we derive the statistical distribution of the channels under Nakagami-m fading. Second, by leveraging the derived statistical channel distributions, we present the effective capacity analysis under the delay quality-of-service (QoS) constraint. In particular, we derive the closed-form expressions for the effective capacity of the reflecting and transmitting backscatter nodes (BSNs) under the energy-splitting protocol of STAR-RIS. To obtain more insight into the performance of the considered system, we provide the asymptotic analysis, and derive the upper bound on the effective capacity, which represents the ergodic capacity. Our simulation results validate the analytical analysis, and reveal the effectiveness of the STAR-RIS-assisted BAC-NOMA system over the conventional RIS (C-RIS)- and orthogonal multiple access (OMA)-based counterparts. Finally, to highlight the trade-off between the effective capacity and energy consumption, we analyze the link-layer energy efficiency. Overall, this paper provides useful guidelines for the performance analysis and design of the STAR-RIS-assisted BAC-NOMA systems.
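The effective-capacity metric used here has a compact Monte Carlo form, shown below for a single Nakagami-m block-fading link with illustrative parameters; this is a generic sketch of the metric, not the closed-form analysis of the STAR-RIS-assisted system.

```python
import numpy as np

rng = np.random.default_rng(7)
snr, m, blocks = 10.0, 2.0, 500_000

# Nakagami-m block-fading power gains (unit mean) and per-block achievable rates.
g = rng.gamma(m, 1.0 / m, size=blocks)
rate = np.log2(1 + snr * g)

def effective_capacity(theta):
    """EC(theta) = -(1/theta) * ln E[exp(-theta * R)]: the maximum constant arrival
    rate supportable under a statistical delay-QoS exponent theta."""
    return -np.log(np.mean(np.exp(-theta * rate))) / theta

print("ergodic capacity (theta -> 0 limit):", round(rate.mean(), 3), "bit/s/Hz")
for theta in (0.01, 0.1, 1.0, 5.0):
    print(f"theta={theta:<5}: EC = {effective_capacity(theta):.3f} bit/s/Hz")
```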
Targeting the delay-constrained Internet-of-Things (IoT) applications in sixth-generation (6G) networks, in this paper, we study the integration of simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs) and non-orthogonal multiple access-based backscatter communication (BAC-NOMA) under statistical delay quality-of-service (QoS) requirements. In particular, we derive the closed-form expressions for the effective capacity of the STAR-RIS assisted BAC-NOMA system under Nakagami-m fading channels and energy-splitting protocol of STAR-RIS. Our simulation results demonstrate the effectiveness of STAR-RIS over the conventional RIS (C-RIS) and show an excellent correlation with analytical results, validating our analysis. The results reveal that the stringent QoS constraint degrades the effective capacity; however, the system performance can be improved by increasing the STAR-RIS elements and adjusting the energy-splitting coefficients. Finally, we determine the optimal pair of power reflection coefficients subject to the per-BSN effective capacity requirements.
Backscatter Communication (BackCom), which is based on passive reflection and modulation of an incident radio-frequency (RF) wave, has emerged as a cutting-edge technological paradigm for self-sustainable Internet-of-things (IoT). Nevertheless, contemporary BackCom systems are limited to short-range and low data rate applications only, rendering them insufficient on their own to support pervasive connectivity among the massive number of IoT devices. Meanwhile, wireless networks are rapidly evolving toward the smart radio paradigm. In this regard, reconfigurable intelligent surfaces (RISs) have come to the forefront to transform the wireless propagation environment into a fully controllable and customizable space in a cost-effective and energy-efficient manner. Targeting the sixth-generation (6G) horizon, we anticipate the integration of RISs into BackCom systems as a new frontier for enabling 6G IoT networks. In this article, for the first time in the open literature, we provide a tutorial overview of RIS-assisted BackCom (RIS-BackCom) systems. Specifically, we introduce the three different variants of RIS-BackCom and identify the potential improvements that can be achieved by incorporating RISs into BackCom systems. In addition, owing to the unrivaled effectiveness of non-orthogonal multiple access (NOMA), we present a case study on a RIS-assisted NOMA-enhanced BackCom system. Finally, we outline the way forward for translating this disruptive concept into real-world applications.
Industry is going through a transformation phase, enabling automation and data exchange in manufacturing technologies and processes, and this transformation is called Industry 4.0. Industrial Internet-of-Things (IIoT) applications require real-time processing, nearby storage, ultra-low latency, reliability, and high data rate, all of which can be satisfied by a fog computing architecture. With smart devices expected to grow exponentially, the need for an optimized fog computing architecture and protocols is crucial. Therein, efficient, intelligent, and decentralized solutions are required to ensure real-time connectivity, reliability, and green communication. In this paper, we provide a comprehensive review of methods and techniques in fog computing. Our focus is on fog infrastructure and protocols in the context of IIoT applications. The article has two main parts: in the first half, we discuss the history of the industrial revolution and the application areas of IIoT, followed by key enabling technologies that act as building blocks for industrial transformation. In the second half, we focus on fog computing, providing solutions to critical challenges and positioning it as an enabler for IIoT application domains. Finally, open research challenges are discussed to highlight fog computing aspects in different fields and technologies.
Licensed Assisted Access (LAA) has been shown to be a required technology to avoid overcrowding of the licensed bands by the increasing cellular traffic. Proposed by 3GPP, LAA uses a Listen Before Talk (LBT) and backoff mechanism similar to Wi-Fi. While many mathematical models have been proposed to study the problem of the coexistence of LAA and Wi-Fi systems, few have tackled the problem of QoS provisioning, and in particular analysed the behaviour of the various classes of priority available in Wi-Fi and LAA. This paper presents a new mathematical model to investigate the performance of different priority classes in coexisting Wi-Fi and LAA networks. Using Discrete Time Markov Chains, we model the saturation throughput of all eight priority classes used by Wi-Fi and LAA. The numerical results show that with the 3GPP-proposed parameters, a fair coexistence between Wi-Fi and LAA cannot be achieved. Wi-Fi users in particular suffer a significant degradation of their performance, caused by collisions with LAA transmissions, which have a longer duration compared to Wi-Fi transmissions.
Machine to Machine (M2M) communication networks are expected to connect a large number of power-constrained devices in long-range applications with different quality of service (QoS) requirements. Medium access control with QoS support, such as the Enhanced Distributed Channel Access (EDCA) defined by IEEE 802.11e, provides traffic differentiation and corresponding priority classes, which guarantees QoS according to the needs of applications. In this paper, we employ a station grouping mechanism for enhancing the scalability of EDCA to handle the massive number of access attempts expected in large M2M networks. Furthermore, we develop a discrete time Markov chain (DTMC) model to analyze the performance of EDCA with station grouping. Using the developed DTMC model, we calculate the throughput for each access category as well as for different combinations of grouping and EDCA parameters. The numerical results show that the model can precisely reveal the behavior of the EDCA mechanism. Moreover, it is demonstrated that employing the proposed grouping mechanism for EDCA increases the normalized throughput significantly for all priority classes.
Many new narrowband low-power wide-area networks (LPWANs) (e.g., LoRaWAN, Sigfox) have opted to use pure ALOHA-like access for its reduced control overhead and asynchronous transmissions. Although asynchronous access reduces the energy consumption of IoT devices, the network performance suffers from high intra-network interference in dense deployments. Contrarily, adopting synchronous access can improve throughput and fairness; however, it requires time synchronization. Unfortunately, maintaining synchronization over narrowband LPWANs wastes channel time and transmission opportunities. In this paper, we propose the use of out-of-band time dissemination to relatively synchronize LoRa devices and thereby facilitate resource-efficient slotted uplink communication. In this respect, we conceptualize and analyze a co-designed synchronization and random access communication mechanism that can effectively exploit technologies providing limited time accuracy, such as the FM radio data system (FM-RDS). While considering the LoRa-specific parameters, we derive the throughput of the proposed mechanism, compare it to a generic synchronous random access scheme using in-band synchronization, and design the communication parameters under time uncertainty. We scrutinize the transmission time uncertainty of a device by introducing a clock error model that accounts for the errors of the synchronization source, the local clock, the propagation delay, and the transceiver's transmission time uncertainty. We characterize the time uncertainty of FM-RDS with hardware measurements and perform simulations to evaluate the proposed solution. The results, presented in terms of success probability, throughput, and fairness for a single-cell scenario, suggest that FM-RDS, despite its poor absolute synchronization, can be used effectively to realize time-slotted communication in LoRa with performance similar to that of more accurate time-dissemination technologies.
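A simple way to see how such a clock error model feeds into slot design is to add the worst-case drift since the last resynchronization to a multiple of the combined random jitter and use the sum as the slot guard time. The numbers below are placeholders, not the FM-RDS characterization reported in the paper.

```python
import numpy as np

# Illustrative error budget (standard deviations in microseconds) for a device that
# synchronizes to an out-of-band broadcast such as FM-RDS; placeholder values only.
sync_source_jitter = 500.0        # uncertainty of the time-dissemination signal itself
local_clock_ppm    = 20.0         # crystal drift in parts-per-million
resync_period_s    = 60.0         # time between synchronization events
propagation_us     = 50.0         # unknown propagation delay spread
tx_turnaround_us   = 30.0         # transceiver transmission-time uncertainty

# Worst-case drift accumulated since the last sync plus the independent jitter terms
# (summed in quadrature), as one simple way to size the slot guard time.
drift_us = local_clock_ppm * resync_period_s          # ppm * seconds = microseconds
random_us = np.sqrt(sync_source_jitter**2 + propagation_us**2 + tx_turnaround_us**2)
guard_us = drift_us + 3 * random_us                   # 3-sigma margin on the random part

print(f"accumulated drift : {drift_us:.0f} us")
print(f"3-sigma jitter    : {3 * random_us:.0f} us")
print(f"suggested guard   : {guard_us / 1000:.2f} ms per slot boundary")
```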
As the market for low-power wide-area network (LPWAN) technologies expands and the number of connected devices increases, it is becoming important to investigate the performance of LPWAN candidate technologies in dense deployment scenarios. In dense deployments, where the networks usually exhibit the traits of an interference-limited system, a detailed intra- and inter-cell interference analysis of LPWANs is required. In this paper, we model and analyze the performance of uplink communication of a LoRa link in a multi-cell LoRa system. To this end, we use mathematical tools from stochastic geometry and geometric probability to model the spatial distribution of LoRa devices. The model captures the effects of the density of LoRa cells and the allocation of quasi-orthogonal spreading factors (SFs) on the success probability of LoRa transmissions. To account for the practical deployment of LoRa gateways, we model the spatial distribution of the gateways with a Poisson point process (PPP) and a Matérn hard-core point process (MHC). Using our analytical formulation, we find the uplink performance in terms of success probability and potential throughput for each of the available SFs in LoRa's physical layer. Our results show that in a dense multi-cell LoRa deployment with uplink traffic, the inter-cell interference noticeably degrades the system performance.
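A stripped-down Monte Carlo counterpart of such a stochastic-geometry analysis is given below: co-SF interferers form a Poisson point process around a single gateway, all links see Rayleigh block fading, and a transmission succeeds if its signal-to-interference ratio exceeds a co-SF capture threshold. Densities, distances, and the 6 dB threshold are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(8)

def success_probability(density, radius=3000.0, sir_threshold_db=6.0,
                        path_loss_exp=3.5, trials=2000):
    """Monte Carlo success probability of an uplink transmission received at the
    gateway, with co-SF interferers scattered as a Poisson point process in a disc
    of the given radius; Rayleigh block fading on every link."""
    d_sig = 500.0                                     # desired device 500 m from gateway
    wins = 0
    for _ in range(trials):
        n_int = rng.poisson(density * np.pi * radius**2)
        r = radius * np.sqrt(rng.random(n_int))       # interferers uniform in the disc
        h_sig = rng.exponential()
        h_int = rng.exponential(size=n_int)
        p_sig = h_sig * d_sig ** (-path_loss_exp)
        p_int = np.sum(h_int * r ** (-path_loss_exp))
        wins += 10 * np.log10(p_sig / max(p_int, 1e-30)) > sir_threshold_db
    return wins / trials

for dens in (1e-7, 5e-7, 2e-6):                       # co-SF interferers per square metre
    print(f"density {dens:.0e} /m^2 -> success probability {success_probability(dens):.2f}")
```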
We present a stochastic geometry-based model to investigate alternative medium access choices for LoRaWAN, a widely adopted low-power wide-area networking (LPWAN) technology for the Internet-of-Things (IoT). LoRaWAN adoption is driven by its simplified network architecture, air interface, and medium access. The physical layer, known as LoRa, provides quasi-orthogonal virtual channels through spreading factors (SFs) and time-power capture gains. However, the adopted pure ALOHA access mechanism suffers, in terms of scalability, under same-channel same-SF transmissions from a large number of devices. In this paper, our objective is to explore access mechanisms beyond ALOHA for LoRaWAN. Using recent results on the time- and power-capture effects of LoRa, we develop a unified model for the comparative study of other choices, i.e., slotted ALOHA and carrier-sense multiple access (CSMA). The model includes the necessary design parameters of these access mechanisms, such as the guard time and synchronization accuracy for slotted ALOHA, and the carrier sensing threshold for CSMA. It also accounts for the spatial interaction of devices in annular-shaped regions, characteristic of LoRa, for CSMA. The performance derived from the model in terms of coverage probability, throughput, and energy efficiency is validated using Monte-Carlo simulations. Our analysis shows that slotted ALOHA indeed has higher reliability than pure ALOHA, but at the cost of lower energy efficiency for low device densities. CSMA, in turn, outperforms slotted ALOHA at smaller SFs in terms of reliability and energy efficiency, with its performance degrading to that of pure ALOHA at higher SFs.
Although the idea of using wireless links for covering large areas is not new, the advent of Low Power Wide Area Networks (LPWANs) has recently started changing the game. Simple, robust, narrowband modulation schemes permit the implementation of low-cost radio devices offering high receiver sensitivity, thus improving the overall link budget. The several technologies belonging to the LPWAN family, including the well-known LoRaWAN solution, provide a cost-effective answer to many Internet-of-Things (IoT) applications requiring wireless communication capable of supporting large networks of many devices (e.g., smart metering). Generally, the adopted medium access control (MAC) strategy is based on pure ALOHA, which, among other things, minimizes the traffic overhead under the constrained duty cycle limitations of the unlicensed bands. Unfortunately, ALOHA suffers from poor scalability, rapidly collapsing in dense networks. This work investigates the design of an improved LoRaWAN MAC scheme based on slotted ALOHA. In particular, the required time dissemination is provided by out-of-band communications leveraging FM Radio Data System (FM-RDS) broadcasting, which natively covers wide areas both indoors and outdoors. An experimental setup based on low-cost hardware is used to characterize the obtainable synchronization performance and derive a timing error model. Subsequently, improvements in success probability and energy efficiency are validated by means of simulations in very large networks with up to 10000 nodes. It is shown that the advantage of the proposed scheme over conventional LoRaWAN communication is up to 100% when a short update time and a large payload are required. Similar results are obtained regarding the energy efficiency improvement, which is close to 100% for relatively short transmission intervals and long message durations; however, due to the additional overhead of listening to the time dissemination messages, the efficiency gain can be negative for very short, rapidly repeating messages.
Many applications for machine-to-machine (M2M) communications are characterized by large numbers of devices with sporadic transmissions and low energy budgets. This work addresses the importance of energy consumption by proposing a new Medium Access Control (MAC) mechanism for improving the energy efficiency of IEEE 802.11ah, a standard targeting M2M communication. We propose to use the features of the IEEE 802.11ah MAC to realize a hybrid contention-reservation mechanism for the transmission of uplink traffic. In the proposed mechanism, any device with a buffered packet first notifies the Access Point (AP) during a contention phase before being given a reserved timeslot for the data transmission. We develop a mathematical model to analyse the energy consumption of the proposed mechanism and of IEEE 802.11ah. The results show that, for a monitoring scenario, the proposed contention-reservation mechanism reduces the energy consumption for a successful uplink data transmission by up to 55%.
Wireless sensor and actuator networks are an essential element in realizing industrial IoT (IIoT) systems, yet their diffusion is hampered by the complexity of ensuring reliable communication in industrial environments. A significant problem in that respect is the unpredictable fluctuation of a radio link between the line-of-sight (LoS) and the non-line-of-sight (NLoS) state due to time-varying environments. The impact of the link state on reception performance suggests that link-state variations should be monitored at run-time, enabling dynamic adaptation of the transmission scheme on a per-link basis to safeguard QoS. Starting from the assumption that accurate channel sounding is unsuitable for low-complexity IIoT devices, we investigate the feasibility of channel-state identification for platforms with limited sensing capabilities. In this context, we evaluate the performance of different supervised-learning algorithms of variable complexity for the inference of the radio link state. Our approach provides fast link diagnostics by performing online classification based on a single received packet. Furthermore, the method takes into account the effects of limited sampling frequency, bit-depth, and moving average filtering, which are typical of hardware-constrained platforms. The results of an experimental campaign in both industrial and office environments show promising classification accuracy of LoS/NLoS radio links. Additional tests indicate that the proposed method retains good performance even with the low-resolution RSSI samples available in low-cost WSN nodes, which facilitates its adoption in real IIoT networks.
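As a flavor of such a classification pipeline, the sketch below trains a supervised classifier on synthetic per-packet RSSI features (mean, dispersion, spread) quantized to a coarse resolution to mimic hardware-constrained sampling; the feature model and the resulting accuracy are illustrative and not taken from the experimental campaign.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)

def packet_features(n, los):
    """Per-packet features computed from coarse RSSI samples taken during reception:
    in this synthetic model, LoS links show a higher mean and lower dispersion."""
    mean = rng.normal(-60 if los else -75, 3, size=n)
    std = np.abs(rng.normal(1.5 if los else 4.0, 0.5, size=n))
    spread = std * rng.uniform(2, 4, size=n)
    return np.column_stack([mean, std, spread])

X = np.vstack([packet_features(2000, True), packet_features(2000, False)])
y = np.array([1] * 2000 + [0] * 2000)               # 1 = LoS, 0 = NLoS

# Quantize to a coarse RSSI resolution to mimic a low-cost radio front end.
X_q = np.round(X / 4) * 4

X_tr, X_te, y_tr, y_te = train_test_split(X_q, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
print("single-packet LoS/NLoS accuracy:", round(clf.score(X_te, y_te), 3))
```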
The Internet of Things (IoT) paradigm has permeated the industrial world, allowing for innovative services. Wireless communications are particularly attractive, especially when they complement indoor and outdoor Real Time Location Systems (RTLS) for geo-referencing smart objects (e.g., for asset tracking). In this paper, the LoRaWAN solution is considered for transmitting RTLS data. LoRaWAN is an example of a Low Power Wide Area Network: it trades off throughput for coverage and power consumption. However, performance can be greatly improved with limited changes to the standard specifications. In this work, a scheduling layer is suggested above the regular stack for allocating communication resources in a time-slotted channel-hopping medium access strategy. The main innovation is the time synchronization, which is obtained opportunistically from the ranging devices belonging to the RTLSs. The experimental testbed, based on commercially available solutions, demonstrates the affordability and feasibility of the proposed approach. When low-cost GPS (outdoor) and UWB (indoor) ranging devices are considered, a synchronization error of a few microseconds can easily be obtained. The experimental results show that time reference pulses disciplined by GPS have a maximum jitter of 180 ns and a standard deviation of 40 ns, whereas, if time reference pulses disciplined by UWB are considered, the maximum jitter is 3.3 μs and the standard deviation is 0.7 μs.
The Internet of Things (IoT) is in the booming age of its growth; therefore, a vast number of applications, projects, hardware/software solutions, and customized concepts are being developed. The proliferation of the IoT will make location-based services available everywhere for everyone, and this will raise a large number of privacy issues related to the collection, usage, retention, and disclosure of the user's location information. In order to provide a solution to this unique problem of the IoT, this paper proposes the Location Privacy Assured Internet of Things (LPA-IoT) scheme, which uses the concepts of Mix-Zone and location obfuscation along with context-awareness. To the authors' best knowledge, the proposed LPA-IoT scheme is the first location-based privacy-preserving scheme for the IoT that provides flexible privacy levels associated with the present context of the user.
LoRa, together with the LoRaWAN specification, is a technology for Low Power Wide Area Networks (LPWAN) designed to provide connectivity for connected objects, such as remote sensors. Several previous works revealed various weaknesses regarding the security of LoRaWAN v1.0 (the official first draft), and these led to improvements included in LoRaWAN v1.1, released on Oct 11, 2017. In this work, we provide a first look into the security of LoRaWAN v1.1. We present an overview of the protocol and, importantly, present several threats to this new version of the protocol. In addition, we propose our own mitigation strategies for the mentioned threats, to be used in developing the next version of LoRaWAN. The threats presented were not previously discussed; they are possible even within the security assumptions of the specification and are relevant for practitioners implementing LoRa-based applications as well as for researchers and the future evolution of the LoRaWAN specification.
LoRa (along with its upper-layer definition, LoRaWAN) is one of the most promising Low Power Wide Area Network (LPWAN) technologies for implementing Internet of Things (IoT)-based applications. Although it is a popular technology, several works in the literature have revealed vulnerabilities and risks regarding the security of LoRaWAN v1.0 (the official first specification draft). The LoRa Alliance has built upon these findings and introduced several improvements in the security and architecture of LoRa. These efforts resulted in LoRaWAN v1.1, released on 11 October 2017. This work aims at reviewing and clarifying the security aspects of LoRaWAN v1.1. By following ETSI guidelines, we provide a comprehensive Security Risk Analysis of the protocol and discuss several remedies to the security risks described. A threat catalog is presented, along with discussions and analysis in view of the scale, impact, and likelihood of each threat. To the best of the authors' knowledge, this work is one of the first of its kind, providing a detailed security risk analysis related to the latest version of LoRaWAN. Our analysis highlights important practical threats, such as end-device physical capture, rogue gateways, and self-replay, which require particular attention from developers and organizations implementing LoRa networks.
The trending technological research platform is the Internet of Things (IoT), and it will most probably stay that way for a while. One of the main application areas of the IoT is Cyber-Physical Systems (CPSs), in which IoT devices can be leveraged as actuators and sensors in accordance with the system needs. The public acceptance and adoption of CPS services and applications will create a huge number of privacy issues related to the processing, storage, and disclosure of user location information. As a remedy, our paper proposes a methodology to provide location privacy for the users of CPSs. Our proposal takes advantage of concepts such as mix-zones, context-awareness, and location obfuscation. To the best of our knowledge, the proposed methodology is the first privacy-preserving location service for CPSs that offers adaptable privacy levels related to the current context of the user.
Cellular networks are becoming increasingly complex, requiring careful optimization of parameters such as antenna propagation pattern, tilt, direction, height, and transmitted reference signal power to ensure a high-quality user experience. In this paper, we propose a new method to optimize antenna direction in a cellular network using Q-learning. Our approach involves utilizing the open-source quasi-deterministic radio channel generator to generate radio frequency (RF) power maps for various antenna configurations. We then implement a Q-learning algorithm to learn the optimal antenna directions that maximize the signal-to-interference-plus-noise ratio (SINR) across the coverage area. The learning process takes place in the constructed open-source OpenAI Gym environment associated with the antenna configuration. Our tests demonstrate that the proposed Q-learning-based method outperforms random exhaustive search methods and can effectively improve the performance of cellular networks while enhancing the quality of experience (QoE) for end users.
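A minimal sketch of the learning loop is shown below, with a random fixed "power map" replacing the quasi-deterministic channel generator and one independent Q-learner per site sharing the area-averaged SINR as reward. This simplified independent-learner setup may settle on a near-optimal rather than globally optimal configuration; it only illustrates the mechanics, not the paper's Gym environment.

```python
import numpy as np

rng = np.random.default_rng(10)

# Toy "power map": mean SINR (dB) over the coverage area for each of 12 discrete
# antenna directions per site, for 3 sites; random but fixed for the experiment.
N_SITES, N_DIRS = 3, 12
sinr_map = rng.normal(10, 4, size=(N_DIRS,) * N_SITES)

Q = np.zeros((N_SITES, N_DIRS))                  # one row of action-values per site
alpha, eps = 0.1, 0.2
for episode in range(20000):
    # Each site picks a direction (epsilon-greedy); the environment returns the
    # area-averaged SINR of the joint configuration as a shared, noisy reward.
    a = [int(rng.integers(N_DIRS)) if rng.random() < eps else int(Q[s].argmax())
         for s in range(N_SITES)]
    reward = sinr_map[tuple(a)] + rng.normal(0, 0.5)
    for s in range(N_SITES):
        Q[s, a[s]] += alpha * (reward - Q[s, a[s]])

best = [int(Q[s].argmax()) for s in range(N_SITES)]
print("learned directions:", best, "-> mean SINR", round(float(sinr_map[tuple(best)]), 2), "dB")
print("exhaustive optimum:", np.unravel_index(sinr_map.argmax(), sinr_map.shape),
      "-> mean SINR", round(float(sinr_map.max()), 2), "dB")
```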
LoRa (Long Range) technology, with great success in providing coverage for massive Internet-of-Things (IoT) deployments, is recently being considered to complement terrestrial networks with Low Earth Orbit (LEO) satellite connectivity. The objective is to extend coverage to remote areas for various verticals, such as logistics, asset tracking, transportation, utilities, agriculture, and maritime. However, only limited studies have realistically evaluated the effects of ground-to-satellite links, due to the high cost of traditional tools and methods to emulate the radio channel. In this paper, as an alternative to an expensive channel emulator, we propose and develop a method for the experimental study of LoRa satellite links using a lower-cost software-defined radio (SDR). Since the working details of LoRa modulation are available only through reverse-engineered implementations, we employ such an implementation on an SDR platform and add easily controllable adverse channel effects to evaluate LoRa for satellite connectivity. In our work, the emulation of the Doppler effect is considered a key aspect for testing the reliability of LoRa satellite links. Therefore, after demonstrating the correctness of the (ideal) LoRa transceiver implementation, achieving a low packet error ratio (PER) with a commercial LoRa receiver, the baseband signal is distorted to emulate the Doppler effect, mimicking a real LoRa satellite communication. The impact of the Doppler effect is related to the time-on-air (ToA), which is bound to the communication parameters, and to the orbit height. Higher ToAs and lower orbits decrease the usable link duration, mainly because of the dynamic Doppler effect.
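The Doppler-emulation step can be reproduced on any complex baseband signal by integrating a time-varying frequency offset into a phase ramp. The sketch below uses a tanh-shaped Doppler profile and a plain tone as a stand-in for a LoRa frame; the pass duration and peak shift are illustrative values for a low LEO orbit, not measured ones.

```python
import numpy as np

fs = 125_000                       # sample rate matching a 125 kHz LoRa channel
t_pass = 120.0                     # visibility window of the satellite pass (s), illustrative
f_max = 18_000.0                   # peak Doppler shift (Hz), illustrative for a low LEO orbit

def doppler_profile(t):
    """S-shaped Doppler curve of a LEO pass: +f_max at rise, 0 at closest approach,
    -f_max at set (tanh used as a simple stand-in for the orbital geometry)."""
    return -f_max * np.tanh((t - t_pass / 2) / (t_pass / 8))

def apply_doppler(baseband, t_start):
    """Multiply the complex baseband by exp(j*2*pi*cumsum(f_d)/fs) so that the
    instantaneous frequency offset follows the Doppler profile."""
    t = t_start + np.arange(len(baseband)) / fs
    phase = 2 * np.pi * np.cumsum(doppler_profile(t)) / fs
    return baseband * np.exp(1j * phase)

# Example: a 50 ms burst (placeholder for a LoRa frame) transmitted mid-pass.
burst = np.exp(1j * 2 * np.pi * 1000 * np.arange(int(0.05 * fs)) / fs)  # 1 kHz tone
rx = apply_doppler(burst, t_start=30.0)
inst_f = np.diff(np.unwrap(np.angle(rx))) * fs / (2 * np.pi)
print(f"apparent frequency drifts from {inst_f[0]:.0f} Hz to {inst_f[-1]:.0f} Hz during the burst")
```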
Due to the time-varying characteristics of energy harvesting sources, it is a challenge for energy harvesting to provide a stable energy output. In this paper, the time fair energy allocation (TFEA) problem is investigated, and a utility maximization framework is proposed to guarantee both time fairness and energy efficiency of the energy allocation. We then propose a prediction-based energy allocation scheme. First, a deep learning predictor is used to predict the harvested energy. Second, we transform the TFEA problem into a Euclidean shortest path problem and propose a fast time fair energy allocation algorithm (FTF) based on inflection point search. Our algorithm can significantly decrease the number of iterations of the shortest path search and reduce the computation time. In addition, we propose an edge computing assisted energy allocation framework, in which the computing tasks are offloaded to edge gateways. The proposed scheme is evaluated in the scenario of metro vehicle health monitoring. Experimental results show that the time consumption of FTF is at least 92.2% lower than that of traditional algorithms, while the time fairness of FTF is the best. The total time cost and energy cost of our edge computing scheme are also competitive compared with traditional local computing schemes.
The provision of quality of service for Wireless Sensor Networks is more relevant than ever, now that wireless solutions, with their flexibility advantages, are considered for the extension or substitution of wired networks in a multitude of industrial applications. Scheduling algorithms that give end-to-end guarantees for both reliability and latency exist, but according to recent investigations, the achieved quality of service is insufficient for most control applications. Data aggregation is an effective tool to significantly improve end-to-end contention and energy efficiency compared to single-packet transmissions. In practice, though, it is not extensively used for process data on the MAC layer. In this paper, we outline the challenges for the use of data aggregation in Industrial Wireless Sensor Networks. We further extend SchedEx, a reliability-aware scheduling algorithm extension, for packet aggregation. Our simulations with scheduling algorithms from the literature show its great potential for industrial applications. Features for the inclusion of data aggregation into industrial standards such as WirelessHART are suggested, and remaining open issues for future work are presented and discussed.
We propose novel strategies for end-to-end reliability-aware scheduling in Industrial Wireless Sensor Networks (IWSNs). Because of stringent reliability requirements in industrial applications, where missed packets may have disastrous or lethal consequences, all IWSN communication standards are based on Time Division Multiple Access (TDMA), allowing for deterministic channel access on the MAC layer. We therefore extend an existing generic and scalable reliability-aware scheduling approach by the name of SchedEx. SchedEx has proven to quickly produce TDMA schedules that guarantee a user-defined end-to-end reliability level 𝜌 for all multihop communication in a WSN. Moreover, SchedEx executes orders of magnitude faster than recent algorithms in the literature while producing schedules with competitive latencies. We generalize the original problem formulation from single-channel to multichannel scheduling and propose a scalable integration into the existing SchedEx approach. We further introduce a novel optimal bound that produces TDMA schedules with latencies around 20% shorter than the original SchedEx algorithm. Combining the novel strategies with multiple sinks, multiple channels, and the introduced optimal bound, we could through simulations verify latency improvements by almost an order of magnitude, reducing the TDMA superframe execution times from tens of seconds to seconds only, which allows for a utilization of SchedEx for many time-critical control applications.
One of the biggest obstacles to a broad deployment of Wireless Sensor Networks for industrial applications is the difficulty of ensuring end-to-end reliability guarantees while providing as tight latency guarantees as possible. In response, we propose a novel centralized optimization framework for Wireless Sensor Networks that identifies TDMA schedules and routing combinations in an integrated manner. The framework is shown to guarantee end-to-end reliability for all events sent in a scheduling frame while minimizing the delay of all packet transmissions. It can further be applied using alternative Quality of Service objectives and constraints, including energy efficiency and fairness. We consider network settings with multiple channels, multiple sinks, and stringent reliability constraints for data-collecting flows. We compare the results to those achieved by the only scalable reliability-aware TDMA scheduling algorithm to our knowledge, SchedEx, which conducts scheduling only. By making routing part of the problem and by introducing the concept of source-aware routing, we achieve latency improvements for all topologies, with a notable average improvement of up to 31 percent.
Wireless Sensor Networks (WSN) are gaining popularity as a flexible and economical alternative to field-bus installations for monitoring and control applications. For mission-critical applications, communication networks must provide end-to-end reliability guarantees, posing substantial challenges for WSN. Reliability can be improved by redundancy, and is often addressed on the MAC layer by retransmission of lost packets, usually applying slotted scheduling. Recently, researchers have proposed a strategy to optimally improve the reliability of a given schedule by repeating the most rewarding slots in a schedule incrementally until a deadline. This Incrementer can be used with most scheduling algorithms but has scalability issues, which narrows its usability to offline calculation of schedules for networks that are rather static. In this paper, we introduce SchedEx, a generic heuristic scheduling algorithm extension which guarantees a user-defined end-to-end reliability. SchedEx produces schedules competitive with those of the existing approach, and it does so consistently more than an order of magnitude faster. The harsher the end-to-end reliability demand of the network, the better SchedEx performs compared to the Incrementer. We further show that SchedEx has a more evenly distributed improvement impact on the scheduling algorithms, whereas the Incrementer favors schedules created by certain scheduling algorithms.
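The essence of such reliability-aware slot allocation can be sketched as follows: given a per-hop packet reception ratio, keep adding retransmission slots to the currently weakest hop until the product of per-hop success probabilities reaches the user-defined end-to-end target. This greedy single-path sketch is only a simplified illustration of the SchedEx idea, not the algorithm itself.

```python
import math

def min_slots(prr_per_hop, rho):
    """Greedy slot allocation: repeat the transmission on whichever hop currently has
    the lowest success probability, until the product of per-hop success
    probabilities reaches the user-defined end-to-end reliability target rho."""
    slots = [1] * len(prr_per_hop)
    def hop_rel(p, k):
        return 1 - (1 - p) ** k            # success probability of k attempts on one hop
    while math.prod(hop_rel(p, k) for p, k in zip(prr_per_hop, slots)) < rho:
        worst = min(range(len(slots)), key=lambda i: hop_rel(prr_per_hop[i], slots[i]))
        slots[worst] += 1                  # strengthen the weakest hop (simple heuristic)
    return slots

prr = [0.90, 0.75, 0.85]                   # per-hop packet reception ratios (illustrative)
rho = 0.999                                # required end-to-end reliability
alloc = min_slots(prr, rho)
e2e = math.prod(1 - (1 - p) ** k for p, k in zip(prr, alloc))
print("slots per hop:", alloc, "-> end-to-end reliability", round(e2e, 5))
```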
Wireless sensor networks (WSN) must ensure worst-case end-to-end delay and reliability guarantees for mission-critical applications. TDMA-based scheduling offers delay guarantees and is therefore used in industrial monitoring and automation. We propose to evolve pairs of TDMA schedules and routing trees in a cross-layer manner in order to fulfill multiple conflicting QoS requirements, exemplified by latency and reliability. The genetic algorithm we utilize can also be used as an analytical tool for assessing both the feasibility and the expected QoS in production. Near-optimal cross-layer solutions are found within seconds and can be directly enforced in the network.