Wireless use cases in industrial Internet-of-Things (IIoT) networks often require guaranteed data rates ranging from a few kilobits per second to a few gigabits per second. Supporting such a requirement with a single radio access technique is difficult, especially when bandwidth is limited. Although non-orthogonal multiple access (NOMA) can improve system capacity by simultaneously serving multiple devices, its performance suffers from strong inter-user interference. In this paper, we propose a Q-learning-based algorithm for handling many-to-many matching problems such as bandwidth partitioning, device assignment to sub-bands, interference-aware access mode selection (orthogonal multiple access (OMA) or NOMA), and power allocation to each device. The learning technique maximizes system throughput and spectral efficiency (SE) while maintaining quality-of-service (QoS) for the maximum number of devices. The simulation results show that the proposed technique can significantly increase overall system throughput and SE while meeting heterogeneous QoS criteria.
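A minimal tabular Q-learning sketch of the kind of learning loop described above, for illustration only: the state/action encoding, reward, and transition functions are placeholders standing in for the paper's joint sub-band, access-mode, and power-allocation design.

import numpy as np

n_states, n_actions = 64, 16        # quantized (device, sub-band, mode, power) tuples (assumed sizes)
alpha, gamma_d, eps = 0.1, 0.9, 0.1 # learning rate, discount factor, exploration rate
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def reward(state, action):
    # Placeholder: would return the throughput/SE gain minus a QoS-violation penalty.
    return rng.random()

def transition(state, action):
    # Placeholder environment dynamics.
    return rng.integers(n_states)

state = 0
for _ in range(20000):
    action = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[state]))
    r, next_state = reward(state, action), transition(state, action)
    Q[state, action] += alpha * (r + gamma_d * np.max(Q[next_state]) - Q[state, action])
    state = next_state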
In this work, we consider the design of a radio resource management (RRM) solution for the traffic steering (TS) use case in the open radio access network (O-RAN). O-RAN TS deals with the quality-of-service (QoS)-aware steering of traffic through connectivity management (e.g., device-to-cell association, radio spectrum, and power allocation) for emerging heterogeneous networks (HetNets) in 5G-and-beyond systems. However, TS in HetNets is a complex problem in terms of efficiently assigning/utilizing the radio resources while satisfying the diverse QoS requirements, especially of cell-edge users with poor signal-to-interference-plus-noise ratio (SINR). In this respect, we propose an intelligent non-orthogonal multiple access (NOMA)-based RRM technique for a small cell base station (SBS) within the coverage of a macro gNB. A Q-learning-assisted algorithm is designed to allocate the transmit power and frequency sub-bands at the O-RAN control layer such that interference from the macro gNB to SBS devices is minimized while ensuring the QoS of the maximum number of devices. The numerical results show that the proposed method enhances the overall spectral efficiency of the NOMA-based TS use case without adding to the system's complexity or cost compared to traditional HetNet topologies such as co-channel deployments and dedicated channel deployments.
This paper proposes a novel partial non-orthogonal multiple access (P-NOMA)-based semi-integrated sensing and communication (ISaC) system design. As an example ISaC scenario, we consider a vehicle simultaneously receiving the communication signal from infrastructure-to-vehicle (I2V) and the sensing signal from vehicle-to-vehicle (V2V). P-NOMA allows exploiting both the orthogonal multiple access (OMA) and NOMA schemes for interference reduction and spectral efficiency (SE) enhancement while providing the flexibility of controlling the overlap of the sensing and communication signals according to the channel conditions and the priority of the sensing and communication tasks. In this respect, we derive the closed-form expressions for the communication outage probability and the sensing probability of detection in Nakagami-m fading by considering the interference from the composite sensing channel. Our extensive analysis allows capturing the performance trade-offs of the communication and sensing tasks with respect to various system parameters, such as the partial-NOMA overlap parameter, target range, radar cross section (RCS), and the parameter m of the Nakagami-m fading channel. Our results show that the proposed P-NOMA-based semi-ISaC system outperforms the benchmark OMA- and NOMA-based systems in terms of communication spectral efficiency and probability of detection for the sensing target.
The development of the fifth generation (5G) of cellular systems enables the realization of densely connected, seamlessly integrated, and heterogeneous device networks. While 5G systems were developed to support the Internet of Everything (IoE) paradigm of communication, their mass-scale implementations have excessive capital deployment costs and severely detrimental environmental impacts. Hence, these systems are not feasibly scalable for the envisioned real-time, high-rate, high-reliability, and low-latency requirements of connected consumer, commercial, industrial, healthcare, and environmental processes of the IoE network. The IoE vision is expected to support 30 billion devices by 2030; hence, green communication architectures are critical for the development of next-generation wireless systems. In this context, intelligent reflecting surfaces (IRS) have emerged as a promising disruptive technological advancement that can adjust wireless environments in an energy-efficient manner. This work utilizes and analyzes a multi-node distributed IRS-assisted system in variable channel conditions and resource availability. We then employ machine learning and optimization algorithms for efficient resource allocation and system design of a distributed IRS-enabled industrial Internet of Things (IoT) network. The results show that the proposed data-driven solution is a promising optimization architecture for high-rate, next-generation IoE applications.
Localization has gained great attention in recent years, where different technologies have been utilized to achieve high positioning accuracy. Fingerprinting is a common technique for indoor positioning using short-range radio frequency (RF) technologies such as Bluetooth Low Energy (BLE). In this paper, we investigate the suitability of LoRa (Long Range) technology to implement a positioning system using received signal strength indicator (RSSI) fingerprinting. We test in real line-of-sight (LOS) and non-LOS (NLOS) environments to determine appropriate LoRa packet specifications for an accurate RSSI-to-distance mapping function. To further improve the positioning accuracy, we consider the environmental context. Extensive experiments are conducted to examine the performance of LoRa at different spreading factors. We analyze the path loss exponent and the standard deviation of shadowing in each environment.
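For context, the log-distance path-loss model is the usual basis of such an RSSI-to-distance mapping; a small sketch follows, with illustrative parameter values rather than the calibrated values obtained from the LoRa measurement campaign.

def rssi_to_distance(rssi_dbm, rssi_d0_dbm=-40.0, d0_m=1.0, path_loss_exp=2.7):
    """Invert RSSI(d) = RSSI(d0) - 10*n*log10(d/d0) for the distance d."""
    return d0_m * 10 ** ((rssi_d0_dbm - rssi_dbm) / (10.0 * path_loss_exp))

print(f"estimated distance: {rssi_to_distance(-90.0):.1f} m")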
The exponential growth in global mobile data traffic, especially with regard to the massive deployment of devices envisioned for the fifth generation (5G) mobile networks, has given impetus to exploring new spectrum opportunities to support the new traffic demands. The millimeter wave (mmWave) frequency band is considered a potential candidate for alleviating the spectrum scarcity. Moreover, the concept of multi-tier networks has gained popularity, especially for dense network environments. In this article, we deviate from the conventional multi-tier networks and employ the concept of control-data separation architecture (CDSA), which comprises a control base station (CBS) overlaying the data base station (DBS). We assume that the CBS operates on a single sub-6 GHz band, while the DBS possesses dual-band mmWave capability, i.e., the 26 GHz unlicensed band and the 60 GHz licensed band. We formulate a multi-objective optimization (MOO) problem, which jointly optimizes two conflicting objectives: the spectral efficiency (SE) and the energy efficiency (EE). The unique aspect of this work is the analysis of a joint radio resource allocation algorithm based on Lagrangian dual decomposition (LDD). We compare the proposed algorithm with the maximal-rate (maxRx), dynamic sub-carrier allocation (DSA), and joint power and rate adaptation (JPRA) algorithms to show the performance gains achieved by the proposed algorithm.
With the growing popularity of Internet-of-Things (IoT)-based smart city applications, various long-range and low-power wireless connectivity solutions are under rigorous research. LoRa is one such solution that works in the sub-GHz unlicensed spectrum and promises to provide long-range communication with minimal energy consumption. However, conventional LoRa networks are single-hop, with the end devices connected to a central gateway through a direct link, which may be subject to large path loss and hence render low connectivity and coverage. This article motivates the use of multi-hop LoRa topologies to enable energy-efficient connectivity in smart city applications. We present a case study that experimentally evaluates and compares single-hop and multi-hop LoRa topologies in terms of range extension and energy efficiency by evaluating the packet reception ratio (PRR) for various source-to-destination distances, spreading factors (SFs), and transmission powers. The results highlight that a multi-hop LoRa network configuration can save significant energy and enhance coverage. For instance, it is shown that to achieve a 90% PRR, a two-hop network provides 50% energy savings as compared to a single-hop network while increasing coverage by 35% at a particular SF. In the end, we discuss open challenges in multi-hop LoRa deployment and optimization.
Reconfigurable intelligent surfaces (RISs), with the potential to realize smart radio environments, have emerged as an energy-efficient and cost-effective technology to support the services and demands foreseen for the coming decades. By leveraging a large number of low-cost passive reflecting elements, RISs introduce a phase shift in the impinging signal to create a favorable propagation channel between the transmitter and the receiver. In this article, we provide a tutorial overview of RISs for sixth-generation (6G) wireless networks. Specifically, we present a comprehensive discussion on the performance gains that can be achieved by integrating RISs with emerging communication technologies. We address the practical implementation of RIS-assisted networks and expose the crucial challenges, including the RIS reconfiguration, deployment and size optimization, and channel estimation. Furthermore, we explore the integration of RIS and non-orthogonal multiple access (NOMA) under imperfect channel state information (CSI). Our numerical results illustrate the importance of better channel estimation in RIS-assisted networks and indicate the various factors that impact the size of RIS. Finally, we present promising future research directions for realizing RIS-assisted networks in 6G communication.
In this paper, we investigate the reconfigurable intelligent surface (RIS)-assisted non-orthogonal multiple access-based backscatter communication (BAC-NOMA) system under Nakagami-m fading channels and element-splitting protocol. To evaluate the system performance, we first approximate the composite channel gain, i.e., the product of the forward and backscatter channel gains, as a Gamma random variable via the central limit theorem (CLT) and method of moments (MoM). Then, by leveraging the obtained results, we derive the closed-form expressions for the ergodic rates of the strong and weak backscatter nodes (BNs). To provide further insights, we conduct the asymptotic analysis in the high signal-to-noise ratio (SNR) regime. Our numerical results show an excellent correlation with the simulation results, validating our analysis, and demonstrate that the desired system performance can be achieved by adjusting the power reflection and element-splitting coefficients. Moreover, the results reveal the significant performance gain of the RIS-assisted BAC-NOMA system over the conventional BAC-NOMA system.
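The moment-matching step can be illustrated with a short Monte Carlo sketch: the composite (cascaded) channel gain is fitted to a Gamma random variable whose shape and scale reproduce the empirical mean and variance. The element count and Nakagami-m parameters below are assumptions for illustration, not the paper's system settings.

import numpy as np

rng = np.random.default_rng(0)
N, m_f, m_b, n_samples = 64, 2.0, 3.0, 50000              # RIS elements and fading parameters (assumed)
h_f = np.sqrt(rng.gamma(m_f, 1.0 / m_f, (n_samples, N)))  # forward-link Nakagami-m envelopes
h_b = np.sqrt(rng.gamma(m_b, 1.0 / m_b, (n_samples, N)))  # backscatter-link Nakagami-m envelopes
g = (h_f * h_b).sum(axis=1)                               # composite channel gain samples

mean, var = g.mean(), g.var()
k_shape, theta_scale = mean ** 2 / var, var / mean        # Gamma parameters via method of moments
print(f"Gamma(k={k_shape:.1f}, theta={theta_scale:.3f}) approximates the composite gain")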
While targeting the energy-efficient connectivity of the Internet-of-things (IoT) devices in the sixth-generation (6G) networks, in this paper, we explore the integration of non-orthogonal multiple access-based backscatter communication (BAC-NOMA) and simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs). To this end, first, for the performance evaluation of the STAR-RIS-assisted BAC-NOMA system, we derive the statistical distribution of the channels under Nakagami-m fading. Second, by leveraging the derived statistical channel distributions, we present the effective capacity analysis under the delay quality-of-service (QoS) constraint. In particular, we derive the closed-form expressions for the effective capacity of the reflecting and transmitting backscatter nodes (BSNs) under the energy-splitting protocol of STAR-RIS. To obtain more insight into the performance of the considered system, we provide the asymptotic analysis and derive the upper bound on the effective capacity, which represents the ergodic capacity. Our simulation results validate the analytical results and reveal the effectiveness of the STAR-RIS-assisted BAC-NOMA system over the conventional RIS (C-RIS)- and orthogonal multiple access (OMA)-based counterparts. Finally, to highlight the trade-off between the effective capacity and energy consumption, we analyze the link-layer energy efficiency. Overall, this paper provides useful guidelines for the performance analysis and design of STAR-RIS-assisted BAC-NOMA systems.
Targeting the delay-constrained Internet-of-Things (IoT) applications in sixth-generation (6G) networks, in this paper, we study the integration of simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs) and non-orthogonal multiple access-based backscatter communication (BAC-NOMA) under statistical delay quality-of-service (QoS) requirements. In particular, we derive the closed-form expressions for the effective capacity of the STAR-RIS assisted BAC-NOMA system under Nakagami-m fading channels and energy-splitting protocol of STAR-RIS. Our simulation results demonstrate the effectiveness of STAR-RIS over the conventional RIS (C-RIS) and show an excellent correlation with analytical results, validating our analysis. The results reveal that the stringent QoS constraint degrades the effective capacity; however, the system performance can be improved by increasing the STAR-RIS elements and adjusting the energy-splitting coefficients. Finally, we determine the optimal pair of power reflection coefficients subject to the per-BSN effective capacity requirements.
Backscatter Communication (BackCom), which is based on passive reflection and modulation of an incident radio-frequency (RF) wave, has emerged as a cutting-edge technological paradigm for self-sustainable Internet-of-things (IoT). Nevertheless, contemporary BackCom systems are limited to short-range and low data rate applications only, rendering them insufficient on their own to support pervasive connectivity among the massive number of IoT devices. Meanwhile, wireless networks are rapidly evolving toward the smart radio paradigm. In this regard, reconfigurable intelligent surfaces (RISs) have come to the forefront to transform the wireless propagation environment into a fully controllable and customizable space in a cost-effective and energy-efficient manner. Targeting the sixth-generation (6G) horizon, we anticipate the integration of RISs into BackCom systems as a new frontier for enabling 6G IoT networks. In this article, for the first time in the open literature, we provide a tutorial overview of RIS-assisted BackCom (RIS-BackCom) systems. Specifically, we introduce the three different variants of RIS-BackCom and identify the potential improvements that can be achieved by incorporating RISs into BackCom systems. In addition, owing to the unrivaled effectiveness of non-orthogonal multiple access (NOMA), we present a case study on a RIS-assisted NOMA-enhanced BackCom system. Finally, we outline the way forward for translating this disruptive concept into real-world applications.
Industry is going through a transformation phase that enables automation and data exchange in manufacturing technologies and processes; this transformation is called Industry 4.0. Industrial Internet-of-Things (IIoT) applications require real-time processing, nearby storage, ultra-low latency, reliability, and high data rate, all of which can be satisfied by the fog computing architecture. With the number of smart devices expected to grow exponentially, the need for an optimized fog computing architecture and protocols is crucial. Therein, efficient, intelligent, and decentralized solutions are required to ensure real-time connectivity, reliability, and green communication. In this paper, we provide a comprehensive review of methods and techniques in fog computing. Our focus is on fog infrastructure and protocols in the context of IIoT applications. This article covers two main research areas: in the first half, we discuss the history of the industrial revolution and the application areas of IIoT, followed by the key enabling technologies that act as building blocks for industrial transformation. In the second half, we focus on fog computing as a provider of solutions to critical challenges and an enabler for IIoT application domains. Finally, open research challenges are discussed to enlighten fog computing aspects in different fields and technologies.
Many new narrowband low-power wide-area networks (LPWANs) (e.g., LoRaWAN, Sigfox) have opted to use pure ALOHA-like access for its reduced control overhead and asynchronous transmissions. Although asynchronous access reduces the energy consumption of IoT devices, the network performance suffers from high intra-network interference in dense deployments. Conversely, adopting synchronous access can improve throughput and fairness; however, it requires time synchronization. Unfortunately, maintaining synchronization over the narrowband LPWANs wastes channel time and transmission opportunities. In this paper, we propose the use of out-of-band time-dissemination to relatively synchronize the LoRa devices and thereby facilitate resource-efficient slotted uplink communication. In this respect, we conceptualize and analyze a co-designed synchronization and random access communication mechanism that can effectively exploit technologies providing limited time accuracy, such as the FM radio data system (FM-RDS). While considering the LoRa-specific parameters, we derive the throughput of the proposed mechanism, compare it to a generic synchronous random access using in-band synchronization, and design the communication parameters under time uncertainty. We scrutinize the transmission time uncertainty of a device by introducing a clock error model that accounts for the errors in the synchronization source, local clock, propagation delay, and transceiver’s transmission time uncertainty. We characterize the time uncertainty of FM-RDS with hardware measurements and perform simulations to evaluate the proposed solution. The results, presented in terms of success probability, throughput, and fairness for a single-cell scenario, suggest that FM-RDS, despite its poor absolute synchronization, can be used effectively to realize time-slotted communication in LoRa with performance similar to that of more accurate time-dissemination technologies.
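As a rough illustration of how such a clock error model translates into slot design, the sketch below sizes a guard time from an assumed synchronization error, crystal drift, and resynchronization interval; all numbers are placeholders, not the FM-RDS measurements reported in the paper.

sync_error_s = 10e-3          # residual error of the out-of-band time reference (assumed)
drift_ppm = 20.0              # local crystal drift (assumed)
resync_interval_s = 600.0     # time between synchronization events (assumed)
tx_jitter_s = 1e-3            # propagation delay plus transceiver start-time jitter (assumed)

# Worst-case relative offset of two devices grows with drift between resynchronizations.
guard_time_s = 2 * (sync_error_s + drift_ppm * 1e-6 * resync_interval_s) + tx_jitter_s
print(f"per-slot guard time ~ {guard_time_s * 1e3:.1f} ms")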
As the market for low-power wide-area network (LPWAN) technologies expands and the number of connected devices increases, it is becoming important to investigate the performance of LPWAN candidate technologies in dense deployment scenarios. In dense deployments, where the networks usually exhibit the traits of an interference-limited system, a detailed intra- and inter-cell interference analysis of LPWANs is required. In this paper, we model and analyze the performance of uplink communication of a LoRa link in a multi-cell LoRa system. To this end, we use mathematical tools from stochastic geometry and geometric probability to model the spatial distribution of LoRa devices. The model captures the effects of the density of LoRa cells and the allocation of quasi-orthogonal spreading factors (SFs) on the success probability of the LoRa transmissions. To account for the practical deployment of LoRa gateways, we model the spatial distribution of the gateways with a Poisson point process (PPP) and a Matérn hard-core point process (MHC). Using our analytical formulation, we find the uplink performance in terms of success probability and potential throughput for each of the available SFs in LoRa’s physical layer. Our results show that in a dense multi-cell LoRa deployment with uplink traffic, the inter-cell interference noticeably degrades the system performance.
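A toy stochastic-geometry experiment in the same spirit is sketched below: LoRa devices are dropped as a Poisson point process around a gateway and the success probability of a typical uplink is estimated from an SIR threshold. Densities, path-loss exponent, and threshold are illustrative and ignore the SF-dependent capture behavior modeled in the paper.

import numpy as np

rng = np.random.default_rng(1)
R, lam_dev = 3000.0, 1e-5          # cell radius (m) and device density (devices per m^2), assumed
eta, sir_th = 3.5, 1.0             # path-loss exponent and 0 dB capture threshold, assumed

def one_trial():
    n_int = rng.poisson(lam_dev * np.pi * R ** 2)       # number of concurrent interferers
    r_int = R * np.sqrt(rng.random(n_int))              # interferer distances (uniform in the disc)
    d0 = R * np.sqrt(rng.random())                      # typical device distance
    fade = rng.exponential(1.0, n_int + 1)              # Rayleigh power fading
    signal = fade[0] * d0 ** (-eta)
    interference = np.sum(fade[1:] * r_int ** (-eta)) + 1e-12
    return signal / interference > sir_th

print("uplink success probability ~", np.mean([one_trial() for _ in range(5000)]))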
We present a stochastic geometry-based model to investigate alternative medium access choices for LoRaWAN, a widely adopted low-power wide-area networking (LPWAN) technology for the Internet-of-things (IoT). LoRaWAN adoption is driven by its simplified network architecture, air interface, and medium access. The physical layer, known as LoRa, provides quasi-orthogonal virtual channels through spreading factors (SFs) and time-power capture gains. However, the adopted pure ALOHA access mechanism suffers, in terms of scalability, under same-channel same-SF transmissions from a large number of devices. In this paper, our objective is to explore access mechanisms beyond ALOHA for LoRaWAN. Using recent results on time- and power-capture effects of LoRa, we develop a unified model for the comparative study of other choices, i.e., slotted ALOHA and carrier-sense multiple access (CSMA). The model includes the necessary design parameters of these access mechanisms, such as the guard time and synchronization accuracy for slotted ALOHA, and the carrier-sensing threshold for CSMA. It also accounts for the spatial interaction of devices in annular-shaped regions, characteristic of LoRa, for CSMA. The performance derived from the model in terms of coverage probability, throughput, and energy efficiency is validated using Monte-Carlo simulations. Our analysis shows that slotted ALOHA indeed has higher reliability than pure ALOHA but at the cost of lower energy efficiency for low device densities. In contrast, CSMA outperforms slotted ALOHA at smaller SFs in terms of reliability and energy efficiency, with its performance degrading to that of pure ALOHA at higher SFs.
Although the idea of using wireless links for covering large areas is not new, the advent of low-power wide-area networks (LPWANs) has recently started changing the game. Simple, robust, narrowband modulation schemes permit the implementation of low-cost radio devices offering high receiver sensitivity, thus improving the overall link budget. The several technologies belonging to the LPWAN family, including the well-known LoRaWAN solution, provide a cost-effective answer to many Internet-of-things (IoT) applications requiring wireless communication capable of supporting large networks of many devices (e.g., smart metering). Generally, the adopted medium access control (MAC) strategy is based on pure ALOHA, which, among other things, helps minimize the traffic overhead under the constrained duty-cycle limitations of the unlicensed bands. Unfortunately, ALOHA suffers from poor scalability, rapidly collapsing in dense networks. This work investigates the design of an improved LoRaWAN MAC scheme based on slotted ALOHA. In particular, the required time dissemination is provided by out-of-band communications leveraging FM Radio Data System (FM-RDS) broadcasting, which natively covers wide areas both indoors and outdoors. An experimental setup based on low-cost hardware is used to characterize the obtainable synchronization performance and derive a timing error model. Consequently, improvements in success probability and energy efficiency have been validated by means of simulations in very large networks with up to 10000 nodes. It is shown that the advantage of the proposed scheme over conventional LoRaWAN communication is up to 100% when a short update time and a large payload are required. Similar results are obtained for the energy efficiency improvement, which is close to 100% for relatively short transmission intervals and long message durations; however, due to the additional overhead of listening to the time-dissemination messages, the efficiency gain can become negative for very short, frequently repeated messages.
Wireless sensor and actuator networks are an essential element to realize industrial IoT (IIoT) systems, yet their diffusion is hampered by the complexity of ensuring reliable communication in industrial environments. A significant problem in that respect is the unpredictable fluctuation of a radio link between the line-of-sight (LoS) and the non-line-of-sight (NLoS) state due to time-varying environments. The impact of the link state on reception performance suggests that link-state variations should be monitored at run-time, enabling dynamic adaptation of the transmission scheme on a per-link basis to safeguard QoS. Starting from the assumption that accurate channel sounding is unsuitable for low-complexity IIoT devices, we investigate the feasibility of channel-state identification for platforms with limited sensing capabilities. In this context, we evaluate the performance of different supervised-learning algorithms with variable complexity for the inference of the radio-link state. Our approach provides fast link diagnostics by performing online classification based on a single received packet. Furthermore, the method takes into account the effects of limited sampling frequency, bit depth, and moving-average filtering, which are typical of hardware-constrained platforms. The results of an experimental campaign in both industrial and office environments show promising classification accuracy of LoS/NLoS radio links. Additional tests indicate that the proposed method retains good performance even with low-resolution RSSI samples available in low-cost WSN nodes, which facilitates its adoption in real IIoT networks.
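A sketch of the single-packet inference idea with a lightweight supervised classifier is given below; the per-packet RSSI features and the synthetic LoS/NLoS data are stand-ins for the real measurement campaign and feature set.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

def packet_features(is_los, n_samples=64):
    # Synthetic in-packet RSSI samples: NLoS is modeled with lower mean and larger spread.
    base, sigma = (-60.0, 2.0) if is_los else (-75.0, 6.0)
    rssi = base + sigma * rng.standard_normal(n_samples)
    return [rssi.mean(), rssi.std(), rssi.min(), rssi.max()]

X = np.array([packet_features(i % 2 == 0) for i in range(2000)])
y = np.array([i % 2 == 0 for i in range(2000)])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
print("LoS/NLoS classification accuracy:", clf.score(X_te, y_te))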
Cellular networks are becoming increasingly complex, requiring careful optimization of parameters such as antenna propagation pattern, tilt, direction, height, and transmitted reference signal power to ensure a high-quality user experience. In this paper, we propose a new method to optimize antenna direction in a cellular network using Q-learning. Our approach involves utilizing the open-source quasi-deterministic radio channel generator to generate radio frequency (RF) power maps for various antenna configurations. We then implement a Q-learning algorithm to learn the optimal antenna directions that maximize the signal-to-interference-plus-noise ratio (SINR) across the coverage area. The learning process takes place in the constructed open-source OpenAI Gym environment associated with the antenna configuration. Our tests demonstrate that the proposed Q-learning-based method outperforms random exhaustive search methods and can effectively improve the performance of cellular networks while enhancing the quality of experience (QoE) for end users.
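A skeleton of such an antenna-direction environment, mimicking the Gym reset/step interface, is shown below; the SINR lookup is a placeholder for the precomputed RF power maps, and the class and method names are illustrative rather than the released code.

import numpy as np

class AntennaDirectionEnv:
    """Toy environment: the action re-points one cell's antenna; the reward is a coverage-wide SINR proxy."""

    def __init__(self, n_cells=3, n_directions=12, seed=0):
        self.n_cells, self.n_directions = n_cells, n_directions
        self.rng = np.random.default_rng(seed)
        self.state = np.zeros(n_cells, dtype=int)

    def reset(self):
        self.state = self.rng.integers(self.n_directions, size=self.n_cells)
        return tuple(self.state)

    def _mean_sinr_db(self):
        # Placeholder: would look up the RF power map generated for this antenna configuration.
        return -float(np.var(self.state)) + 0.1 * self.rng.standard_normal()

    def step(self, action):
        cell, direction = action
        self.state[cell] = direction
        return tuple(self.state), self._mean_sinr_db(), False, {}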
LoRa (Long Range) technology, with great success in providing coverage for massive Internet-of-things (IoT) deployments, is recently being considered to complement the terrestrial networks with Low Earth Orbit (LEO) satellite connectivity. The objective is to extend coverage to remote areas for various verticals, such as logistics, asset tracking, transportation, utilities, agriculture, and maritime. However, only limited studies have realistically evaluated the effects of ground-to-satellite links due to the high cost of traditional tools and methods to emulate the radio channel. In this paper, as an alternative to an expensive channel emulator, we propose and develop a method for the experimental study of LoRa satellite links using a lower-cost software-defined radio (SDR). Since the working details of LoRa modulation are limited to reverse-engineered imitations, we employ such a version on an SDR platform and add easily controllable adverse channel effects to evaluate LoRa for satellite connectivity. In our work, the emulation of the Doppler effect is considered a key aspect for testing the reliability of LoRa satellite links. Therefore, after demonstrating the correctness of the (ideal) LoRa transceiver implementation, achieving a low packet error ratio (PER) with a commercial LoRa receiver, the baseband signal is distorted to emulate the Doppler effect, mimicking a real LoRa satellite communication. The Doppler effect is related to the time-on-air (ToA), which is bound to the communication parameters and orbit height. Higher ToAs and lower orbits decrease the link duration, mainly because of the dynamic Doppler effect.
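The core of the Doppler emulation can be sketched as a time-varying frequency shift applied to the complex baseband burst before it reaches the receiver; the linear Doppler profile and sample rate below are illustrative, not an orbit-accurate model.

import numpy as np

fs = 125_000                                    # sample rate matched to a 125 kHz LoRa channel
t = np.arange(fs) / fs                          # 1 s of baseband samples
iq = np.exp(1j * 2 * np.pi * 1e3 * t)           # stand-in for the modulated LoRa burst

f_doppler = np.linspace(20e3, -20e3, t.size)    # Doppler sweep across a satellite pass (assumed)
phase = 2 * np.pi * np.cumsum(f_doppler) / fs   # integrate the frequency offset to phase
iq_doppler = iq * np.exp(1j * phase)            # distorted baseband fed to the LoRa receiver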
In this work, we design an elastic open radio access network (O-RAN) slicing for the Industrial Internet of Things (IIoT). Due to the rapid spread of IoT in industrial use cases such as safety and mobile robot communications, the IIoT landscape has shifted from static manufacturing processes towards dynamic manufacturing workflows (e.g., Modular Production System). But unlike IoT, IIoT poses additional challenges such as a harsh communication environment, network-slice resource demand variations, and on-time information updates from the IIoT devices during industrial production. First, we formulate the O-RAN slicing problem for on-time industrial monitoring and control, where the objective is to minimize the cost of fresh information updates (i.e., age of information (AoI)) from the IIoT devices (i.e., sensors) under the device energy consumption and O-RAN slice isolation constraints. Second, we propose an intelligent O-RAN framework based on game theory and machine learning to mitigate the problem’s complexity. We propose a two-sided distributed matching game in the O-RAN control layer that captures the IIoT channel characteristics and the IIoT service priorities to create IIoT device and small cell base station (SBS) preference lists. We then employ an actor-critic model with a deep deterministic policy gradient (DDPG) in the O-RAN service management layer to solve the resource allocation problem for optimizing the network slice configuration policy under time-varying slicing demand. Furthermore, the proposed matching game within the actor-critic model training helps to enforce the long-term policy-based guidance for resource allocation that reflects the trends of the satisfaction of all IIoT devices and SBSs with the assignment. Finally, the simulation results show that the proposed solution enhances the performance gain for the IIoT services by serving an average of 50% and 43.64% more IIoT devices than the baseline approaches.
A fundamental challenge in Mission-Critical Internet of Things (MC-IoT) is to provide reliable and timely delivery of the unpredictable critical traffic. In this paper, we propose an efficient prioritized Medium Access Control (MAC) protocol for Wireless Sensor Networks (WSNs) in MC-IoT control applications. The proposed protocol utilizes a random Clear Channel Assessment (CCA)-based channel access mechanism to handle the simultaneous transmissions of critical data and to reduce the collision probability between the contending nodes, which in turn decreases the transmission latency. We develop a Discrete-Time Markov Chain (DTMC) model to evaluate the performance of the proposed protocol analytically in terms of the expected delay and throughput. The obtained results show that the proposed protocol can enhance the performance of the WirelessHART standard by 80% and 190% in terms of latency and throughput, respectively, along with better transmission reliability.
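The evaluation step common to such DTMC analyses is solving for the stationary distribution and reading off the expected metrics; a generic sketch with a toy three-state chain (not the protocol's actual state space) follows.

import numpy as np

# Toy transition matrix over (idle, contend, transmit) states; values are illustrative.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.7, 0.1, 0.2]])

# Solve pi P = pi together with the normalization sum(pi) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

slot_s = 1e-3                                   # slot duration (assumed)
throughput = pi[2] / slot_s                     # visits to the transmit state per second
print("stationary distribution:", np.round(pi, 3), "throughput:", throughput, "tx/s")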
The industrial Internet-of-things (IIoT) paradigm is reshaping the way industrial measurement systems are designed. Industrial systems require collecting accurate and timely measurements from the field using smart sensor networks distributed in wide production areas. In this context, wireless connectivity of sensors acquires undeniable importance and, in turn, opens significant research challenges. Therefore, the research community is actively analyzing the suitability of different wireless technologies, for instance, Wi-Fi, 5G-and-beyond, and low-power wide-area networks (LPWANs), toward their possible industrial applications and optimizing them to realize high-performance and accurate smart measurement systems. In this paper, we focus on long range (LoRa)-based LPWANs (i.e., LoRaWAN), especially to overcome the duty cycle (DC) limitations of the adopted ALOHA-based medium access control (MAC) strategy in the industrial, scientific, and medical (ISM) bands. The ISM bands are subject to an hourly constraint on the number of packet transmissions or inter-message delay, where the devices using higher spreading factors (SFs) can quickly consume the available transmission time. In this paper, we propose and assess hybrid MAC designs in a LoRa network by combining carrier sense multiple access (CSMA) with ALOHA in two different ways: i) exploiting different channel plans for the access mechanisms, and ii) relay-assisted access, with devices using small SFs assisting neighboring higher-SF devices with a listen-before-talk (LBT) mechanism. Our simulation results reveal that the proposed access strategies lead to a higher packet delivery rate (PDR) as well as a lower mean and standard deviation of the communication delay, thus increasing the overall measurement accuracy.
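To make the DC constraint concrete, the sketch below applies the standard Semtech time-on-air formula and converts a 1% duty cycle into a per-hour packet budget; the payload size and SF are example values, not the evaluated network settings.

import math

def lora_time_on_air(payload_bytes, sf, bw=125e3, cr=1, preamble=8, crc=1, implicit_header=0):
    de = 1 if (sf >= 11 and bw == 125e3) else 0          # low-data-rate optimization
    t_sym = (2 ** sf) / bw
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * crc - 20 * implicit_header
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25) * t_sym + n_payload * t_sym

toa = lora_time_on_air(payload_bytes=20, sf=12)
print(f"ToA = {toa * 1e3:.0f} ms; a 1% duty cycle allows about {int(36.0 / toa)} packets/hour")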
In recent years, the adoption of industrial wireless sensor and actuator networks (IWSANs) has greatly increased. However, the time-critical performance of IWSANs is considerably affected by external sources of interference. In particular, when an IEEE 802.11 network coexists in the same environment, a significant drop in communication reliability is observed. This, in turn, represents one of the main challenges for a wide-scale adoption of IWSANs. Interference classification through spectrum sensing is a possible step towards interference mitigation, but the long sampling window required by many of the approaches in the literature undermines their run-time applicability in time-slotted channel hopping (TSCH)-based IWSANs. Aiming at minimizing both the sensing time and the memory footprint of the collected samples, a centralized interference classifier based on support vector machines (SVMs) is introduced in this article. The proposed mechanism, tested with sample traces collected in industrial scenarios, enables the classification of interference from IEEE 802.11 networks and microwave ovens, while ensuring high classification accuracy with a sensing duration below 300 ms. In addition, the obtained results show that the fast classification together with a contained sampling frequency ensures the suitability of the method for TSCH-based IWSANs.
Energy sampling-based interference detection and identification (IDI) methods are hindered by the limitations of commercial off-the-shelf (COTS) IoT hardware. Moreover, long sensing times, complexity, and the inability to track concurrent interference strongly inhibit their applicability in most IoT deployments. Motivated by the increasing need for on-device IDI for wireless coexistence, we develop a lightweight and efficient method targeting interference identification already at the level of single interference bursts. Our method exploits real-time extraction of envelope and model-aided spectral features, specifically designed considering the physical properties of signals captured with COTS hardware. We adopt multiple supervised-learning (SL) classifiers, ensuring a suitable performance and complexity trade-off for IoT platforms with different computational capabilities. The proposed IDI method is capable of real-time identification of IEEE 802.11b/g/n, 802.15.4, 802.15.1, and Bluetooth Low Energy wireless standards, enabling isolation and extraction of standard-specific traffic statistics even in the case of heavy concurrent interference. We perform an experimental study in real environments with heterogeneous interference scenarios, showing 90%–97% burst identification accuracy. Meanwhile, the lightweight SL methods, running online on wireless sensor network COTS hardware, ensure sub-ms identification time and a limited performance gap from machine-learning approaches.
The limited coexistence capabilities of current Internet of Things (IoT) wireless standards produce inefficient spectrum utilization and mutual performance impairment. The problem becomes critical in industrial IoT (IIoT) applications, which have stringent quality-of-service (QoS) requirements and very low error tolerance. The constant growth of wireless applications over unlicensed bands mandates then the adoption of dynamic spectrum-access techniques, which can significantly benefit from interference mapping over multiple dimensions of the radio space. In this article, we analyze the critical role of real-time interference detection and classification mechanisms that rely on only IIoT devices, without the added complexity of specialized hardware. The tradeoffs between classification performance and feasibility are analyzed in connection with the implementation on low-complexity IIoT devices. Moreover, we explain how to use such mechanisms for enabling IIoT networks to construct and maintain multidimensional interference maps at runtime in an autonomous fashion. Finally, we give an overview of the opportunities and challenges of using interference maps to enhance the performance of IIoT networks under interference.
The lack of coordinated spectrum access for IoT wireless technologies in unlicensed bands creates inefficient spectrum usage and poses growing concerns in several IoT applications. Spectrum awareness becomes then crucial, especially in the presence of strict quality-of-service (QoS) requirements and mission-critical communication. In this work, we propose a lightweight spectral analysis framework designed for strongly resource-constrained devices, which are the norm in IoT deployments. The proposed solution enables model-based reconstruction of the spectrum of single radio-bursts entirely onboard without DFT processing. The spectrum sampling exploits pattern-based frequency sweeping, which enables the spectral analysis of short radio-bursts while minimizing the sampling error induced by non-ideal sensing hardware. We carry out an analysis of the properties of such sweeping patterns, derive useful theoretical error bounds, and explain how to design optimal patterns for radio front-ends with different characteristics. The experimental campaign shows that the proposed solution enables the estimation of central frequency, bandwidth, and spectral shape of signals at runtime by using a strongly hardware-limited radio platform. Finally, we test the potential of the proposed solution in combination with a proactive blacklisting scheme, allowing a substantial improvement in real-time QoS of a radio link under interference.
Priority-oriented packet transmission (PPT) has been a promising solution for transmitting time-critical packets in a timely manner during emergency scenarios in the Internet of Things (IoT). In this paper, we develop two associated discrete-time Markov chain (DTMC) models to analyze the performance of PPT in an IoT network. Using the proposed DTMC models, we investigate the effect of traffic prioritization in terms of the average packet delay for a synchronous medium access control (MAC) protocol. Furthermore, the results obtained from the analytical models are validated via discrete-event simulations. Numerical results prove the accuracy of the models and reveal the behavior of priority-based packet transmissions.
Wireless technologies are nowadays being considered for implementation in industrial automation. However, due to the strict reliability and timeliness requirements of time-critical applications, there are many open research challenges in merging wireless technologies with industrial systems. Although many medium access control (MAC) protocols have been proposed in recent years, a coherent effort on both the physical (PHY) and MAC layers is needed. In this paper, we propose a protocol termed multiple equi-priority MAC (MEP-MAC), which combines the functions of the MAC and PHY layers: the MAC layer ensures a deterministic behavior of the system by assigning priorities to the nodes, while non-orthogonal multiple access (NOMA) at the PHY layer enables multiple nodes of equal priority to simultaneously gain channel access and transmit data to the gateway. We adapt a discrete-time Markov chain (DTMC) model to handle multiple nodes of equal priority and analyze the proposed protocol analytically. The results show that the proposed protocol can provide up to 70% and 40% improvement in terms of system throughput and latency, respectively, as compared to a system that does not leverage NOMA at the PHY layer.
Industrial Internet-of-things (IIoT) networks have recently gained enormous attention because of the huge advantages they offer. A typical IIoT network consists of a large number of sensor and actuator devices distributed randomly in an industrial area to automate various processes, where a major goal is to collect data from all these devices and to process it centrally at an aggregator. However, for an efficient system operation, a proficient scheduling mechanism is required due to its direct association with performance parameters. Many existing techniques, such as time division multiple access (TDMA), do not perform well in industrial environments due to their stringent timeliness requirements. In this paper, we propose a medium access control (MAC) layer protocol for node scheduling in a scenario where some devices may not be in one-hop range of the aggregator, which renders a multi-hop mechanism inevitable. A discrete-time Markov chain (DTMC) model is proposed to characterize the transmission of multi-tier nodes, and the analytical expressions of throughput and latency are derived. It is observed that the delay scales linearly with the number of nodes that are not within one-hop range of the aggregator. Numerical simulations have been performed to validate the theoretical results.
This paper explores the performance gains achieved by a space-time block coding (STBC)-based non-orthogonal multiple access (NOMA) scheme versus regular cooperative NOMA under the influence of imperfect successive interference cancellation (SIC). Imperfection in the SIC process of NOMA causes performance degradation that is proportional to the number of SIC operations involved. It is demonstrated through outage and sum-rate performance that STBC-NOMA maintains a considerable performance margin over conventional cooperative NOMA even under adverse imperfections in SIC. Furthermore, closed-form expressions of the signal-to-interference ratio (SIR) at the user equipment (UE) and the outage probability for STBC-NOMA under imperfect SIC are also derived and validated.
Collaborative augmented reality (AR), which enables interaction and consistency in multi-user AR scenarios, is a promising technology for AR-guided remote monitoring, optimization, and troubleshooting of complex manufacturing processes. However, for uplink high data rate demands in collaborative-AR, the design of an efficient transmission and resource allocation scheme is demanding in resource-constrained wireless systems. To address this challenge, we propose a collaborative non-orthogonal multiple access (C-NOMA)-enabled transmission scheme by exploiting the fact that multi-user interaction often leads to common and individual views of the scenario (e.g., the region of interest). C-NOMA is designed as a two-step transmission scheme by treating these views separately and allowing users to offload the common views partially. Further, we define an optimization problem to jointly optimize the time and power allocation for AR users, with an objective of minimizing the maximum rate-distortion of the individual views for all users while guaranteeing a target distortion of their common view for its mutual significance. For its inherent non-linearity and non-convexity, we solve the defined problem using a primal-dual interior-point algorithm with a filter line search as well as by developing a successive convex approximation (SCA) method. The simulation results demonstrate that the optimized C-NOMA outperforms the non-collaborative baseline scheme by 23.94% and 77.28% in terms of energy consumption and achievable distortion on the common information, respectively.
Through the lens of average and peak age-of-information (AoI), this paper takes a fresh look into the uplink medium access solutions for mission-critical (MC) communication coexisting with enhanced mobile broadband (eMBB) service. Considering the stochastic packet arrivals from an MC user, we study three access schemes: orthogonal multiple access (OMA) with eMBB preemption (puncturing), non-orthogonal multiple access (NOMA), and rate-splitting multiple access (RSMA), the latter two both with concurrent eMBB transmissions. Puncturing is found to reduce both average AoI and peak AoI (PAoI) violation probability but at the expense of decreased eMBB user rates and increased signaling complexity. Conversely, NOMA and RSMA offer higher eMBB rates but may lead to MC packet loss and AoI degradation. The paper systematically investigates the conditions under which NOMA or RSMA can closely match the average AoI and PAoI violation performance of puncturing while maintaining data rate gains. Closed-form expressions for average AoI and PAoI violation probability are derived, and conditions on the eMBB and MC channel gain difference with respect to the base station are analyzed. Additionally, optimal power and rate splitting factors in RSMA are determined through an exhaustive search to minimize MC outage probability. Notably, our results indicate that with a small loss in the average AoI and PAoI violation probability the eMBB rate in NOMA and RSMA can be approximately five times higher than that achieved through puncturing.
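A minimal discrete-time AoI simulation illustrating the metric is sketched below: Bernoulli packet arrivals at the MC user and an i.i.d. per-slot delivery success probability stand in for the access scheme (puncturing, NOMA, or RSMA would simply map to different success probabilities). It is not the paper's analytical model.

import numpy as np

def average_aoi(p_arrival, p_success, n_slots=200000, seed=0):
    rng = np.random.default_rng(seed)
    aoi, total, waiting, t_gen = 0, 0, False, 0
    for t in range(1, n_slots + 1):
        if rng.random() < p_arrival:             # a fresh packet replaces any waiting one
            waiting, t_gen = True, t
        if waiting and rng.random() < p_success: # successful delivery resets the age
            aoi, waiting = t - t_gen, False
        else:
            aoi += 1
        total += aoi
    return total / n_slots

print(average_aoi(p_arrival=0.1, p_success=0.95))  # e.g., puncturing-like reliability
print(average_aoi(p_arrival=0.1, p_success=0.75))  # e.g., NOMA/RSMA with residual interference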
The increasing proliferation of Internet-of-things (IoT) networks in a given space requires exploring various communication solutions (e.g., cooperative relaying, non-orthogonal multiple access, spectrum sharing) jointly to increase the performance of coexisting IoT systems. However, the design complexity of such a system increases, especially under the constraints of performance targets. In this respect, this paper studies multiple-access enabled relaying by a lower-priority secondary system, which cooperatively relays the incoming information to the primary users and simultaneously transmits its own data. We consider that the direct link between the primary transmitter-receiver pair uses orthogonal multiple access in the first phase. In the second phase, a secondary transmitter adopts a relaying strategy to support the direct link while it uses non-orthogonal multiple access (NOMA) to serve the secondary receiver. As a relaying scheme, we propose a piece-wise and forward (PF) relay protocol, which, depending on the absolute value of the received primary signal, acts similar to decode-and-forward (DF) and amplify-and-forward (AF) schemes in high and low signal-to-noise ratio (SNR), respectively. By doing so, PF achieves the best of these two relaying protocols using the adaptive threshold according to the transmitter-relay channel condition. Under PF-NOMA, first, we find the achievable rate region for primary and secondary receivers, and then we formulate an optimization problem to derive the optimal PF-NOMA time and power fraction that maximize the secondary rate subject to reliability constraints on both the primary and the secondary links. Our simulation results and analysis show that the PF-NOMA outperforms DF-NOMA and AF-NOMA-based relaying techniques in terms of achievable rate regions and rate-guaranteed relay locations.
Through the lens of the age-of-information (AoI) metric, this paper takes a fresh look into the performance of coexisting enhanced mobile broadband (eMBB) and ultra-reliable low-latency (URLLC) services in the uplink scenario. To reduce AoI, a URLLC user with stochastic packet arrivals has two options: orthogonal multiple access (OMA) with the preemption of the eMBB user (labeled as puncturing) or non-orthogonal multiple access (NOMA) with the ongoing eMBB transmission. Puncturing leads to lower average AoI at the expense of a decrease in the eMBB user's rate and an increase in signaling complexity. On the other hand, NOMA can provide a higher eMBB rate at the expense of URLLC packet loss due to interference and, thus, a degradation in AoI performance. We study under which conditions NOMA could provide an average AoI performance that is close to that of puncturing, while maintaining the gain in the data rate. To this end, we derive a closed-form expression for the average AoI and investigate conditions on the eMBB and URLLC distances from the base station at which the difference between the average AoI in NOMA and in puncturing is within some small gap β. Our results show that with β as small as 0.1 minislot, the eMBB rate in NOMA can be roughly 5 times higher than that of puncturing. Thus, by choosing an appropriate access scheme, both the favorable average AoI for URLLC users and the high data rate for eMBB users can be achieved.
Anomaly-based In-Vehicle Intrusion Detection System (IV-IDS) is one of the protection mechanisms to detect cyber attacks on automotive vehicles. Using artificial intelligence (AI) for anomaly detection to thwart cyber attacks is promising but suffers from generating false alarms and making decisions that are hard to interpret. Consequently, this issue leads to uncertainty and distrust towards such IDS design unless it can explain its behavior, e.g., by using eXplainable AI (XAI). In this paper, we consider the XAI-powered design of such an IV-IDS using CAN bus data from a public dataset, named “Survival”. Novel features are engineered, and a Deep Neural Network (DNN) is trained over the dataset. A visualization-based explanation, “VisExp”, is created to explain the behavior of the AI-based IV-IDS, which is evaluated by experts in a survey, in relation to a rule-based explanation. Our results show that experts’ trust in the AI-based IV-IDS is significantly increased when they are provided with VisExp (more so than the rule-based explanation). These findings confirm the effect, and by extension the need, of explainability in automated systems, and VisExp, being a source of increased explainability, shows promise in helping involved parties gain trust in such systems.
Across all industries, digitalization and automation are on the rise under the Industry 4.0 vision, and the forest industry is no exception. The forest industry depends on distributed flows of raw materials to the industry through various phases, wherein the typical workflow of timber loading and offloading is finding traction in using automation and 5G wireless networking technologies to enhance efficiency and reduce cost. This article presents one such ongoing effort in Sweden, Remote-Timber, which demonstrates a 5G-connected teleoperation use-case within the workflow of a timber terminal, and disseminates its business attractiveness as well as first measurement results on network performance. It also outlines the future needs of 5G network design/optimization from a teleoperation perspective. Overall, the motivation of this article is to disseminate our early-stage findings and reflections to the industrial and academic communities for furthering the research and development activities in enhancing 5G networks for verticals.
Ultra-reliable and low-latency communications (URLLC) is an emerging feature in 5G and beyond wireless systems, which is introduced to support stringent latency and reliability requirements of mission-critical industrial applications. In many potential applications, multiple sensors/actuators collaborate and require isochronous operation with strict and bounded jitter, e.g., 1µs. To this end, network time synchronization becomes crucial for real-time and isochronous communication between a controller and the sensors/actuators. In this paper, we look at different applications in factory automation and smart grids to reveal the requirements of device-level time synchronization and the challenges in extending the high-granularity timing information to the devices. Also, we identify the potential over-the-air synchronization mechanisms in 5G radio interface, and discuss the needed enhancements to meet the jitter constraints of time-sensitive URLLC applications.
The wireless edge is about distributing intelligence to wireless devices, wherein the distribution of an accurate time reference is essential for time-critical machine-type communication (cMTC). In 5G-based cMTC, enabling time synchronization in the wireless edge means moving beyond the current synchronization needs and solutions in 5G radio access. In this article, we analyze the device-level synchronization needs of potential cMTC applications: industrial automation, power distribution, vehicular communication, and live audio/video production. We present an over-the-air synchronization scheme comprising 5G air interface parameters and discuss their associated timing errors. We evaluate the estimation error in device-to-base-station propagation delay from timing advance under random errors and show how to reduce the estimation error. In the end, we identify the random errors specific to dense multipath fading environments and discuss countermeasures.
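As an example of the granularity involved, the sketch below computes the one-way propagation-delay and distance resolution implied by one NR timing-advance step, using the usual 3GPP definitions (T_c = 1/(480e3*4096) s and a TA step of 16*64*T_c/2^mu); it is a back-of-the-envelope aid, not the paper's error analysis.

C = 3.0e8                                  # speed of light (m/s)
T_C = 1.0 / (480e3 * 4096)                 # NR basic time unit (s)

for mu in range(4):                        # sub-carrier spacing 15 * 2^mu kHz
    ta_step = 16 * 64 * T_C / (2 ** mu)    # round-trip granularity of one TA step
    print(f"mu={mu}: TA step = {ta_step * 1e9:.1f} ns, "
          f"one-way distance resolution ~ {C * ta_step / 2:.1f} m")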
Cellular networks are envisioned to be a cornerstone of future industrial IoT (IIoT) wireless connectivity in terms of fulfilling the industrial-grade coverage, capacity, robustness, and ultra-responsiveness requirements. This vision has led to a verticals-centric service-based architecture in 5G radio access and core networks, with the capability to include the 5G-AI-Edge ecosystem for computing, intelligence, and flexible deployment and integration options (e.g., centralized and distributed, physical and virtual) while eliminating the privacy/security concerns of mission-critical systems. In this paper, driven by the industrial interest in enabling large-scale wireless IIoT deployments for operational agility and flexible, cost-efficient production, we present the state-of-the-art 5G architecture, transformative technologies, and recent design trends, which we also selectively supplement with new results. We also identify several research challenges in these promising design trends that beyond-5G systems must overcome to support the rapidly unfolding transition in creating value-centric industrial wireless networks.
Fine-grained and wide-scale connectivity is a precondition to fully digitalize the manufacturing industry. Driven by this need, new technologies such as time-sensitive networking (TSN), 5G wireless networks, and industrial Internet-of-things (IIoT) are being applied to industrial communication networks to reach the desired connectivity spectrum. With TSN emerging as a wired networking solution, converging IT and operational technology (OT) data streams, 5G is upscaling its access and core networks to function as an independent or a transparent TSN carrier in demanding OT use-cases. In this article, we discuss the drivers for future industrial wireless systems and review the role of 5G and its industrial-centric evolution towards meeting the strict performance standards of factories. We also elaborate on the 5G deployment options, including frequency spectrum allocation and private networks, to help the factory owners discern various dimensions of solution space and concerns to integrate 5G in industrial networks.
Internet-of-things (IoT), with the vision of billions of connected devices, is bringing a massively heterogeneous character to wireless connectivity in unlicensed bands. The heterogeneity in medium access parameters, transmit power, and activity levels among the coexisting networks leads to detrimental cross-technology interference. The stochastic traffic distributions of an interfering network, shaped under CSMA/CA rules, and channel fading make it challenging to model and analyze the performance of an interfered network. In this paper, to study the temporal interaction between the traffic distributions of two coexisting networks, we develop a renewal-theoretic packet collision model and derive a generic collision-time distribution (CTD) function of an interfered system. The CTD function holds for any busy- and idle-time distributions of the coexisting traffic. As the earlier studies suggest long-tailed idle-time statistics in real environments, the developed model only requires the Laplace transform of long-tailed distributions to find the CTD. Furthermore, we present a packet error rate (PER) model under the proposed CTD and multipath fading of the interfering signals. Using this model, a computationally efficient PER approximation for the interference-limited case is developed to analyze the performance of an interfered link.
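A simulation counterpart of the collision-time idea is sketched below: an interferer alternates busy and idle periods (a long-tailed idle-time distribution is assumed here via a Pareto draw), and the overlap seen by a packet of duration T arriving at a random instant is measured. The distributions and parameters are illustrative, not the fitted models of the paper.

import numpy as np

rng = np.random.default_rng(3)
T = 2e-3                                        # duration of the interfered packet (s), assumed

def collision_time(horizon=1.0):
    t, busy_intervals = 0.0, []
    while t < horizon:
        idle = (1.0 + rng.pareto(1.5)) * 1e-3   # long-tailed idle time (Pareto, assumed)
        busy = rng.exponential(0.5e-3)          # interferer burst duration (assumed)
        busy_intervals.append((t + idle, t + idle + busy))
        t += idle + busy
    start = rng.uniform(0.0, horizon - T)       # packet arrives at a random instant
    return sum(max(0.0, min(b, start + T) - max(a, start)) for a, b in busy_intervals)

samples = [collision_time() for _ in range(5000)]
print("mean collision time within a packet:", np.mean(samples), "s")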
In this paper, we study cross-layer optimization of low-power wireless links for reliability-aware applications while considering both the constraints and the non-ideal characteristics of the hardware in Internet-of-things (IoT) devices. Specifically, we define an energy consumption (EC) model that captures the energy cost of the transceiver circuitry, the power amplifier (PA), packet error statistics, packet overhead, etc., in delivering a useful data bit. We derive the EC models for an ideal and two realistic non-linear PA models. To incorporate packet error statistics, we develop a simple yet accurate closed-form packet error rate (PER) approximation, expressed in elementary functions, for Rayleigh block-fading. Using the EC models, we derive energy-optimal yet reliability- and hardware-compliant conditions for limiting the unconstrained optimal signal-to-noise ratio (SNR) and payload size. Together with these conditions, we develop a semi-analytic algorithm for resource-constrained IoT devices to jointly optimize the parameters of the physical (modulation size, SNR) and medium access control (payload size and number of retransmissions) layers in relation to link distance. Our results show that, despite reliability constraints, the common notion that higher-order M-ary modulations (M-QAM) are energy optimal for short-range communication prevails, and such modulations can extend device lifetime by up to 180% compared to the OQPSK modulation often used in IoT devices. However, the reliability constraints reduce both their range and energy efficiency, while a non-ideal traditional PA reduces the range by a further 50% and diminishes the energy gains unless a better PA is used.
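A minimal numerical sketch of the kind of energy-per-useful-bit trade-off described above, assuming uncoded BPSK over Rayleigh block fading, a simple square-root PA-efficiency model, and arbitrary circuit and link-budget values; the paper's EC models, PA characterizations, and closed-form PER are more detailed than this illustration.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

# Illustrative energy-per-useful-bit model; all values are assumptions.
P_circ = 30e-3                 # transceiver circuit power [W]
eta_max = 0.35                 # maximum PA efficiency
R_b = 250e3                    # bit rate [bit/s]
L_pl, L_oh = 1024, 128         # payload and overhead bits
L = L_pl + L_oh
N0_pl = 1e-5                   # noise power times path loss at the receiver [W]

snr_db = np.linspace(0, 30, 301)
snr = 10 ** (snr_db / 10)

# Transmit power needed to hit the target average SNR on the assumed link budget.
P_tx = snr * N0_pl
# Simple non-linear PA model: efficiency scales with the square root of the
# output back-off (a stand-in for the paper's PA models).
eta = eta_max * np.sqrt(P_tx / P_tx.max())
P_pa = P_tx / eta

# Average PER of uncoded BPSK in Rayleigh block fading via the waterfall
# threshold w (a known-style approximation, not the paper's own closed form).
per_awgn = lambda g: 1 - (1 - 0.5 * erfc(np.sqrt(g))) ** L
w, _ = quad(per_awgn, 0, 200, limit=200)
per = 1 - np.exp(-w / snr)

T_pkt = L / R_b
# Expected energy per successfully delivered payload bit (ARQ, i.i.d. retries).
e_bit = (P_pa + P_circ) * T_pkt / ((1 - per) * L_pl)

i = np.argmin(e_bit)
print(f"Energy-optimal SNR ~ {snr_db[i]:.1f} dB, "
      f"~{e_bit[i] * 1e6:.2f} uJ/bit at PER ~ {per[i]:.3f}")
```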
The vision of connecting billions of battery-operated devices for diverse emerging applications calls for a wireless communication system that can support stringent reliability and latency requirements. Both reliability and energy efficiency are critical for many of these applications, which involve communication with short packets that undermine the coding gain achievable with large packets. In this paper, we study a cross-layer approach to optimize the performance of low-power wireless links. First, we derive a simple and accurate packet error rate (PER) expression for uncoded schemes in block-fading channels, based on a new proposition showing that the waterfall threshold in the PER upper bound in Nakagami-m fading channels is tightly approximated by the m-th moment of the asymptotic distribution of the PER in the AWGN channel. The proposed PER approximation establishes an explicit connection between the physical- and link-layer parameters and the packet error rate. We exploit this connection for cross-layer design and optimization of communication links. To this end, we propose a semi-analytic framework to jointly optimize the signal-to-noise ratio (SNR) and modulation order at the physical layer, and the packet length and number of retransmissions at the link layer, with respect to distance under prescribed delay and reliability constraints.
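For orientation, the Rayleigh-fading special case (m = 1) of this kind of waterfall-threshold approximation, which is well known in the literature, can be written as follows; the paper's contribution is the Nakagami-m generalization in which the threshold is characterized through the m-th moment of the asymptotic AWGN PER distribution:
\[
\bar{P}_e(\bar{\gamma}) \;=\; \int_{0}^{\infty} P_e^{\mathrm{AWGN}}(\gamma)\, f_{\gamma}(\gamma)\, \mathrm{d}\gamma
\;\approx\; 1 - \exp\!\left(-\frac{\omega}{\bar{\gamma}}\right),
\qquad
\omega \;=\; \int_{0}^{\infty} P_e^{\mathrm{AWGN}}(\gamma)\, \mathrm{d}\gamma,
\]
where \(P_e^{\mathrm{AWGN}}(\gamma) = 1 - \bigl(1 - \mathrm{BER}(\gamma)\bigr)^{L}\) is the PER of an uncoded L-bit packet in AWGN and \(f_{\gamma}(\gamma) = \frac{1}{\bar{\gamma}}\, e^{-\gamma/\bar{\gamma}}\) is the exponential SNR density with mean \(\bar{\gamma}\) under Rayleigh fading.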
We present a generic approximation of the packet error rate (PER) function of uncoded schemes in the AWGN channel using extreme value theory (EVT). The PER function can assume both the exponential and the Gaussian Q-function bit error rate (BER) forms. The EVT approach leads to a closed-form approximation of the average PER in block-fading channels that is the best available in terms of accuracy and computational efficiency. The numerical analysis shows that the approximation remains tight for any value of SNR and packet length, whereas earlier studies approximate the average PER only at asymptotic SNRs and packet lengths.
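As an illustrative companion to this abstract, the sketch below numerically fits a Gumbel-type (EVT-flavored) template to the exact AWGN PER of an uncoded BPSK packet; the fitted parameters are obtained here by least squares purely for illustration, whereas the paper derives closed-form expressions, and the packet length and SNR grid are assumptions.

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

# Exact PER of an uncoded L-bit packet in AWGN with BPSK (Q-function BER form).
L = 512                                        # packet length in bits (assumption)
ber = lambda g: 0.5 * erfc(np.sqrt(g))         # BPSK bit error rate
per_exact = lambda g: 1 - (1 - ber(g)) ** L

# Gumbel-type template; a and b are fitted numerically here, not closed-form.
gumbel_per = lambda g, a, b: 1 - np.exp(-np.exp(-(g - a) / b))

g = np.linspace(0.1, 15, 400)                  # linear SNR grid (assumption)
(a, b), _ = curve_fit(gumbel_per, g, per_exact(g),
                      p0=[6.0, 1.0], bounds=([0.0, 0.1], [20.0, 10.0]))

err = np.max(np.abs(gumbel_per(g, a, b) - per_exact(g)))
print(f"fitted a = {a:.3f}, b = {b:.3f}, max abs error = {err:.3e}")
```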
Low-power wide-area network (LPWAN) technologies are gaining momentum for Internet-of-things (IoT) applications since they promise wide coverage to a massive number of battery-operated devices using grant-free medium access. LoRaWAN, with its physical (PHY) layer design and regulatory efforts, has emerged as the most widely adopted LPWAN solution. By using chirp spread spectrum modulation with quasi-orthogonal spreading factors (SFs), the LoRa PHY offers coverage to wide-area applications while supporting a high density of devices. However, its scalability has thus far been inadequately modeled, and the effect of interference resulting from the imperfect orthogonality of the SFs has not been considered. In this paper, we present an analytical model of a single-cell LoRa system that accounts for the impact of interference among transmissions over the same SF (co-SF) as well as different SFs (inter-SF). By modeling the interference field as a Poisson point process under duty-cycled ALOHA, we derive the signal-to-interference ratio (SIR) distributions for several interference conditions. Results show that, even for a duty cycle as low as 0.33%, the network performance predicted under co-SF interference alone is considerably optimistic: including inter-SF interference reveals a further drop of approximately 10% in success probability and 15% in coverage probability for 1500 devices in a LoRa channel. Finally, we illustrate how our analysis can characterize the critical device density with respect to cell size for a given reliability target.
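The sketch below is a simplified Monte Carlo counterpart of such a single-cell co-SF/inter-SF analysis, with duty-cycled ALOHA, Rayleigh fading, and capture/rejection thresholds of 6 dB and -16 dB taken as assumptions from the general LoRa literature rather than from this paper; it is not the paper's analytical model.

```python
import numpy as np

rng = np.random.default_rng(7)

R = 2000.0            # cell radius [m] (assumption)
N = 1500              # devices in the channel
duty = 0.0033         # 0.33% duty cycle -> activity probability
alpha = 3.5           # path-loss exponent (assumption)
sf_share = 1 / 6      # devices spread uniformly over SF7..SF12 (assumption)
thr_co_db, thr_inter_db = 6.0, -16.0   # capture/rejection thresholds (assumptions)

def draw_radii(n):
    # Uniform positions over the disc, clipped to a 1 m minimum distance.
    return np.maximum(R * np.sqrt(rng.random(n)), 1.0)

success, trials = 0, 20000
for _ in range(trials):
    r0 = draw_radii(1)[0]                         # desired device
    s0 = r0 ** (-alpha) * rng.exponential()       # Rayleigh fading power
    active = rng.random(N - 1) < duty
    r_i = draw_radii(int(active.sum()))
    p_i = r_i ** (-alpha) * rng.exponential(size=r_i.size)
    same_sf = rng.random(r_i.size) < sf_share
    I_co, I_inter = p_i[same_sf].sum(), p_i[~same_sf].sum()
    ok_co = s0 >= 10 ** (thr_co_db / 10) * I_co if I_co > 0 else True
    ok_inter = s0 >= 10 ** (thr_inter_db / 10) * I_inter if I_inter > 0 else True
    success += ok_co and ok_inter

print(f"Estimated success probability: {success / trials:.3f}")
```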
The scale at which wireless technologies are penetrating our daily lives, primarily driven by Internet of Things (IoT)-based smart cities, is opening up possibilities for novel localization and tracking techniques. Recently, low-power wide-area network (LPWAN) technologies have emerged as a solution to offer scalable wireless connectivity for smart city applications. LoRa is one such technology, providing energy efficiency and wide-area coverage. This article explores the use of machine learning techniques, such as support vector machines, spline models, decision trees, and ensemble learning, for received signal strength indicator (RSSI)-based ranging in LoRa networks, using a training dataset collected in two different environments: indoors and outdoors. The most suitable ranging model is then used to experimentally evaluate the accuracy of localization and tracking using trilateration in the studied environments. We then present the accuracy of a LoRa-based positioning system (LPS) and compare it with existing ZigBee-, WiFi-, and Bluetooth-based solutions. Finally, we discuss the challenges of satellite-independent tracking systems and propose future directions to improve accuracy and deployment feasibility.
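As a toy stand-in for the ranging-plus-trilateration pipeline described above (the article uses measured datasets and regressors such as SVMs, splines, and trees), the following sketch fits a simple log-distance RSSI model on synthetic data and localizes a device from three hypothetical gateways by nonlinear least squares; all positions and model parameters are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)

# Synthetic RSSI-vs-distance training data from a log-distance path-loss model
# with shadowing (a stand-in for the measured LoRa datasets in the article).
d_train = rng.uniform(10, 500, 200)                 # distances [m]
PL0, n_pl, sigma = -40.0, 2.7, 3.0                  # assumed model parameters
rssi_train = PL0 - 10 * n_pl * np.log10(d_train) + rng.normal(0, sigma, d_train.size)

# "Ranging model": linear fit of RSSI -> log10(distance).
A = np.vstack([rssi_train, np.ones_like(rssi_train)]).T
coef, *_ = np.linalg.lstsq(A, np.log10(d_train), rcond=None)
rssi_to_dist = lambda rssi: 10 ** (coef[0] * rssi + coef[1])

# Trilateration: three gateways at known positions, ranges from the model.
gateways = np.array([[0.0, 0.0], [400.0, 0.0], [200.0, 350.0]])
device = np.array([150.0, 120.0])                   # ground truth (assumption)
rssi_meas = PL0 - 10 * n_pl * np.log10(np.linalg.norm(gateways - device, axis=1)) \
            + rng.normal(0, sigma, 3)
ranges = rssi_to_dist(rssi_meas)

residual = lambda p: np.linalg.norm(gateways - p, axis=1) - ranges
est = least_squares(residual, x0=np.array([200.0, 200.0])).x
print("Estimated position:", est, " error [m]:", np.linalg.norm(est - device))
```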
With the increase in connected Internet-of-things (IoT) devices, the need for low-power wide-area networks (LPWANs) is evident, and LoRaWAN is one such technology, offering an elegant solution to the problems of long-range communication and battery consumption. A parameter of special interest in LoRaWAN is the spreading factor (SF), and transmissions on different SFs are often assumed to be independent of each other. However, this claim has been debunked in practice by many works showing that the SFs are imperfectly orthogonal. To maximize connectivity and throughput, several techniques have been introduced, such as non-orthogonal multiple access (NOMA) and dynamic resource allocation. NOMA has recently received considerable attention, especially for IoT networks, because it embraces interference and attempts to recover the desired information packets from corrupted ones. Furthermore, NOMA can be easily implemented on the base-station side using the principle of successive interference cancellation (SIC). In this paper, we investigate how SIC, under the assumption of imperfect orthogonality of the SF channels, can be used to increase the performance of the system. We derive expressions for the success and coverage probabilities under various SF allocation schemes and identify the most efficient scheme for different scenarios.
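Below is a hedged Monte Carlo sketch of SIC at a LoRa gateway under imperfect SF orthogonality; the power-ordered decoding, the thresholds (6 dB co-SF capture, -16 dB inter-SF rejection), the random SF assignment, and the channel model are illustrative assumptions rather than the paper's analytical setup.

```python
import numpy as np

rng = np.random.default_rng(5)

alpha = 3.5              # path-loss exponent (assumption)
thr_co_db = 6.0          # same-SF capture threshold (assumption)
thr_inter_db = -16.0     # inter-SF rejection threshold (assumption)
noise = 1e-14            # small noise power so the setup is interference-limited

def sic_decoded(n_users=4, R=2000.0, trials=20000):
    decoded_total = 0
    for _ in range(trials):
        r = np.maximum(R * np.sqrt(rng.random(n_users)), 1.0)
        p = r ** (-alpha) * rng.exponential(size=n_users)   # Rayleigh fading powers
        sf = rng.integers(7, 13, size=n_users)               # random SF7..SF12
        remaining = p.copy()                                  # not-yet-cancelled signals
        for k in np.argsort(p)[::-1]:                         # strongest first
            same = (sf == sf[k]) & (np.arange(n_users) != k)
            other = sf != sf[k]
            sinr_co = p[k] / (remaining[same].sum() + noise)
            sinr_inter = p[k] / (remaining[other].sum() + noise)
            if (10 * np.log10(sinr_co) >= thr_co_db and
                    10 * np.log10(sinr_inter) >= thr_inter_db):
                decoded_total += 1
                remaining[k] = 0.0     # cancel the decoded signal (SIC step)
            # else: the signal stays as residual interference for later attempts
    return decoded_total / (trials * n_users)

print("Average fraction of packets decoded with SIC:", round(sic_decoded(), 3))
```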
With the ubiquitous growth of Internet-of-things (IoT) devices, current low-power wide-area network (LPWAN) technologies will inevitably face performance degradation due to congestion and interference. Rule-based approaches to assigning and adapting the device parameters are insufficient in dynamic, massive IoT scenarios. For example, the adaptive data rate (ADR) algorithm in LoRaWAN has proven inefficient and outdated for large-scale IoT networks. Meanwhile, new solutions involving machine learning (ML) and reinforcement learning (RL) techniques have been shown to be very effective in solving resource allocation in dense IoT networks. In this article, we propose a new concept of using two independent learning approaches for allocating the spreading factor (SF) and transmission power to the devices, combining a decentralized and a centralized approach. The SF is allocated to the devices using RL for the contextual bandit problem, while the transmission power is assigned centrally by treating it as a supervised ML problem. We compare our approach with existing state-of-the-art algorithms, showing a significant improvement in both network-level goodput and energy consumption, especially for large and highly congested networks.
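The snippet below sketches only the decentralized part of such a scheme as a plain epsilon-greedy contextual bandit that picks an SF per coarse distance bin in a toy environment; the reward shaping, the environment, and the omission of the centralized supervised power-allocation stage are all assumptions relative to the article.

```python
import numpy as np

rng = np.random.default_rng(11)

sfs = np.arange(7, 13)                # arms: SF7..SF12
n_bins = 4                            # coarse distance bins as context (assumption)
Q = np.zeros((n_bins, sfs.size))      # per-context action-value estimates
counts = np.zeros_like(Q)
eps = 0.1                             # exploration rate

def success_prob(dist_bin, sf):
    """Toy environment: an SF 'reaches' a bin or not (assumption, not measured)."""
    reach = (sf - 6) / 6.0            # normalized range of the SF
    need = (dist_bin + 1) / n_bins    # normalized distance of the bin
    return 0.95 if reach >= need else 0.05

for t in range(50000):
    ctx = rng.integers(n_bins)                        # device appears in a random bin
    a = rng.integers(sfs.size) if rng.random() < eps else int(np.argmax(Q[ctx]))
    success = rng.random() < success_prob(ctx, sfs[a])
    reward = float(success) / 2 ** (sfs[a] - 7)       # goodput-like: penalize airtime
    counts[ctx, a] += 1
    Q[ctx, a] += (reward - Q[ctx, a]) / counts[ctx, a]   # incremental mean update

print("Learned best SF per distance bin:", sfs[np.argmax(Q, axis=1)])
```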
In this work, we address the problem of jointly optimizing the transmit power and blocklength in a two-user scenario for orthogonal multiple access (OMA) and non-orthogonal multiple access (NOMA) schemes. We formulate an optimization problem to obtain the energy-optimal blocklength and transmit power under ultra-reliable and low-latency communication (URLLC) reliability and latency constraints, with the aim of minimizing the energy consumption in the short-blocklength regime. Due to the problem's complexity, we decompose it into two sub-problems for the OMA case, where the base station (BS) employs a 2D search for the strong user and the bisection method for the weak user. For NOMA, in contrast, we find sufficient transmit-power conditions to obtain a feasible solution. Our results show that the minimum required energy increases with the reliability requirement for both OMA and NOMA, while NOMA consumes more energy than OMA for the same reliability target. Moreover, the results indicate that NOMA reduces latency through better blocklength utilization compared to OMA when the channel-gain disparity between the users is small.
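As an illustration of the kind of bisection step mentioned for the weak user, the following sketch finds the minimum transmit power that fits k information bits into blocklength n at error probability eps under the finite-blocklength normal approximation; the single-user setup and all numerical values are assumptions for illustration, not the paper's system model.

```python
import numpy as np
from scipy.stats import norm

k, n, eps = 256, 200, 1e-5          # bits, channel uses, target error (assumptions)
g = 0.05                             # weak-user channel gain over noise power (assumption)
Qinv = norm.isf(eps)                 # inverse Q-function of the target error

def achievable_bits(power):
    """Normal approximation of the maximal number of bits at blocklength n."""
    snr = power * g
    V = 1 - 1 / (1 + snr) ** 2                       # channel dispersion
    return n * np.log2(1 + snr) - np.sqrt(n * V) * Qinv * np.log2(np.e)

lo, hi = 1e-3, 1e4                                   # power bracket [W] (assumption)
assert achievable_bits(hi) >= k, "increase the upper bracket"
for _ in range(60):                                  # bisection on the monotone function
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if achievable_bits(mid) >= k else (mid, hi)

print(f"Minimum power ~ {hi:.3f} W for n = {n}, k = {k}, eps = {eps}")
```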