
UMBC Center for Real-time Distributed Sensing and Autonomy

Permanent URI for this collection

The vision of this center is to advance AI-based autonomy to deliver safe, effective, and resilient new capabilities across a variety of complex mission types, including search and rescue; persistent surveillance; managing, adapting, and optimizing smart, connected robots and machinery; and augmenting humans in complex analytical and decision-making tasks. These systems are continually improving, but realizing their full potential still requires substantial advances in their capability, command and control, interoperability, resiliency, and trustworthiness.

Focus Areas: Networking, Sensing and IoT for the Battlefield; Adaptive Machine/Deep Learning; Individual and Collective Health Assessment; Adaptive Cybersecurity; Cross-Domain Machine Learning with Few Labels; AI/ML on the Edge; Predictive Maintenance

Recent Submissions

Now showing 1 - 7 of 7
  • Item
    Evaluating Machine Learning and Statistical Models for Greenland Subglacial Bed Topography
    (2023-11-05) Yi, Katherine; Dewar, Angelina; Tabassum, Tartela; Lu, Jason; Chen, Ray; Alam, Homayra; Faruque, Omar; Li, Sikan; Morlighem, Mathieu; Wang, Jianwu
    The purpose of this research is to study how different machine learning and statistical models can be used to predict bedrock topography under the Greenland ice sheet using ice-penetrating radar and satellite imagery data. Accurate bed topography representations are crucial for understanding ice sheet stability and vulnerability to climate change. We explore nine predictive models, including dense neural networks, long short-term memory (LSTM), variational auto-encoders, extreme gradient boosting (XGBoost), Gaussian process regression, and kriging-based residual learning. Model performance is evaluated with mean absolute error (MAE), root mean squared error (RMSE), coefficient of determination (R²), and terrain ruggedness index (TRI). In addition to testing various models, different interpolation methods, including nearest neighbor, bilinear, and kriging, are also applied in preprocessing. The XGBoost model with kriging interpolation exhibits strong predictive capabilities but demands extensive resources. Alternatively, the XGBoost model with bilinear interpolation shows robust predictive capabilities and requires fewer resources. These models effectively capture the complexity of the terrain hidden under the Greenland ice sheet with precision and efficiency, making them valuable tools for representing spatial patterns in diverse landscapes.
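
The abstract above names XGBoost regression and the MAE/RMSE/R² metrics. Below is a minimal, illustrative sketch of that kind of evaluation loop, assuming the xgboost and scikit-learn packages are available; the feature and target arrays are random stand-ins, not the paper's radar or satellite data.

```python
# Illustrative sketch (not the authors' pipeline): fit an XGBoost regressor on
# stand-in features and score it with the metrics named in the abstract.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 6))   # stand-in for interpolated radar/satellite predictors
y = rng.normal(size=5000)        # stand-in for bed elevation (m)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = xgb.XGBRegressor(n_estimators=500, max_depth=6, learning_rate=0.05)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
mae = mean_absolute_error(y_te, pred)
rmse = mean_squared_error(y_te, pred) ** 0.5
r2 = r2_score(y_te, pred)
print(f"MAE={mae:.3f}  RMSE={rmse:.3f}  R2={r2:.3f}")
```
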
  • Item
    HeteroEdge: Addressing Asymmetry in Heterogeneous Collaborative Autonomous Systems
    (2023-05-05) Anwar, Mohammad Saeid; Dey, Emon; Devnath, Maloy Kumar; Ghosh, Indrajeet; Khan, Naima; Freeman, Jade; Gregory, Timothy; Suri, Niranjan; Jayarajah, Kasthuri; Ramamurthy, Sreenivasan Ramasamy; Roy, Nirmalya
    Gathering knowledge about surroundings and generating situational awareness for IoT devices is of utmost importance for systems developed for smart urban and uncontested environments. For example, a large-area surveillance system is typically equipped with multi-modal sensors such as cameras and LIDARs and is required to execute deep learning algorithms for action, face, behavior, and object recognition. However, these systems face power and memory constraints due to their ubiquitous nature, making it crucial to optimize data processing, deep learning algorithm input, and model inference communication. In this paper, we propose a self-adaptive optimization framework for a testbed comprising two Unmanned Ground Vehicles (UGVs) and two NVIDIA Jetson devices. This framework efficiently manages multiple tasks (storage, processing, computation, transmission, inference) on heterogeneous nodes concurrently. It involves compressing and masking input image frames, identifying similar frames, and profiling devices to obtain boundary conditions for optimization. Finally, we propose and optimize a novel parameter, the split-ratio, which indicates the proportion of the data to be offloaded to another device while considering the networking bandwidth, busy factor, memory (CPU, GPU, RAM), and power constraints of the devices in the testbed. Our evaluations, captured while executing multiple tasks (e.g., PoseNet, SegNet, ImageNet, DetectNet, DepthNet) simultaneously, reveal that executing 70% (split-ratio = 70%) of the data on the auxiliary node minimizes the offloading latency by approx. 33% (18.7 ms/image to 12.5 ms/image) and the total operation time by approx. 47% (69.32 s to 36.43 s) compared to the baseline configuration (executing on the primary node).
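
The split-ratio idea in the abstract above lends itself to a small worked example. The sketch below is hypothetical (the timing numbers and function names are not from the paper): it simply picks the offload fraction that balances the primary and auxiliary nodes' completion times.

```python
# Hypothetical sketch of the split-ratio concept: choose the fraction of frames
# offloaded to an auxiliary node so the two nodes finish at roughly the same time.
def completion_time(split_ratio, n_frames, t_primary, t_aux, t_transfer):
    """Makespan (s) when `split_ratio` of the frames go to the auxiliary node."""
    offloaded = int(split_ratio * n_frames)
    local = n_frames - offloaded
    primary_time = local * t_primary
    aux_time = offloaded * (t_transfer + t_aux)   # transfer + remote inference
    return max(primary_time, aux_time)

def best_split(n_frames, t_primary, t_aux, t_transfer):
    candidates = [i / 100 for i in range(0, 101, 5)]   # 0.00, 0.05, ..., 1.00
    return min(candidates,
               key=lambda r: completion_time(r, n_frames, t_primary, t_aux, t_transfer))

# Example: 1000 frames, 30 ms/frame locally, 12 ms/frame remotely plus 8 ms transfer.
ratio = best_split(1000, 0.030, 0.012, 0.008)
print(ratio, completion_time(ratio, 1000, 0.030, 0.012, 0.008))
```

In the paper, the optimization additionally accounts for bandwidth, busy factor, memory, and power constraints; this toy version only balances per-node time.
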
  • Item
    A Comprehensive Study of Gradient Inversion Attacks in Federated Learning and Baseline Defense Strategies
    (IEEE, 2023-04-10) Ovi, Pretom Roy; Gangopadhyay, Aryya
    With a greater emphasis on data confidentiality and legislation, collaborative machine learning algorithms are being developed to protect sensitive private data. Federated learning (FL) is the most popular of these methods, enabling collaborative model construction among a large number of users without the requirement for explicit data sharing. Because FL models are built in a distributed manner with a gradient-sharing protocol, they are vulnerable to “gradient inversion attacks,” where sensitive training data is extracted from raw gradients. Gradient inversion attacks to reconstruct data are regarded as one of the most severe privacy risks in FL, as attackers covertly spy on gradient updates and backtrack from the gradients to obtain information about the raw data without compromising model training quality. Even without prior knowledge about the private data, the attacker can breach the secrecy and confidentiality of the training data via the intermediate gradients. Existing FL training protocols have been shown to exhibit vulnerabilities that can be exploited by adversaries both within and outside the system to compromise data privacy. Thus, it is critical to make FL system designers aware of the implications of future FL algorithm design on privacy preservation. Motivated by this, our work focuses on exploring data confidentiality and integrity in FL, where we emphasize the intuitions, approaches, and fundamental assumptions used by existing gradient inversion attack strategies to retrieve the data. We then examine the limitations of different approaches and evaluate their qualitative performance in retrieving raw data. Furthermore, we assess the effectiveness of baseline defense mechanisms against these attacks for robust privacy preservation in FL.
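
Since the abstract above surveys gradient inversion attacks, a minimal gradient-matching sketch may help fix ideas. It follows the general "deep leakage from gradients" recipe under simplifying assumptions (a single linear model, random stand-in data, soft dummy labels); it is not the authors' experimental setup.

```python
# Minimal gradient-matching ("gradient inversion") sketch in PyTorch.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))
loss_fn = nn.CrossEntropyLoss()

# Victim's private sample and the gradient it would share during FL training.
x_true = torch.randn(1, 3, 32, 32)
y_true = torch.tensor([3])
true_grads = torch.autograd.grad(loss_fn(model(x_true), y_true), model.parameters())

# Attacker optimizes dummy data/labels so their gradients match the shared ones.
x_dummy = torch.randn(1, 3, 32, 32, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    # Soft-label cross-entropy of the dummy sample.
    dummy_loss = torch.sum(torch.softmax(y_dummy, dim=-1)
                           * -torch.log_softmax(model(x_dummy), dim=-1))
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
    grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(20):
    opt.step(closure)
print("final gradient-matching loss:", closure().item())
```
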
  • Item
    AirDrop: Towards Collaborative, Multi-Resolution Air-Ground Teaming for Terrain-Aware Navigation
    (ACM, 2023-02-22) Jayarajah, Kasthuri; Gart, Sean; Gangopadhyay, Aryya
    Driven by advances in deep neural network models that fuse multimodal input such as RGB and depth representations to accurately understand the semantics of the environment (e.g., objects of different classes, obstacles, etc.), ground robots have gone through dramatic improvements in navigating unknown environments. Relying on their singular, limited perspective, however, can lead to suboptimal paths that are wasteful and quickly drain their batteries, especially in the case of long-horizon navigation. We consider a special class of ground robots that are air-deployed, and pose the central question: can we leverage aerial perspectives of differing resolutions and fields of view from air-to-ground robots to achieve superior terrain-aware navigation? We posit that a key enabler of this direction of research is collaboration between such robots to collectively update their route plans, leveraging advances in long-range communication and on-board computing. Whilst each robot can capture a sequence of high-resolution images during its descent, intelligent, lightweight pre-processing on-board can dramatically reduce the size of the data that needs to be shared with its peers over severely bandwidth-limited, long-range communication channels (e.g., over sub-gigahertz frequencies). In this paper, we discuss use cases and key technical challenges that must be resolved to realize our vision of collaborative, multi-resolution terrain-awareness for air-to-ground robots.
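
The bandwidth argument in the abstract above (shrinking descent imagery before sharing it over sub-gigahertz links) can be illustrated with a toy calculation. The sketch below assumes the Pillow and NumPy packages; the random frame, resolutions, and 250 kbit/s link rate are hypothetical, not values from the paper.

```python
# Illustrative only: compare JPEG payload sizes of a frame at coarser resolutions
# and the rough time each would take over a bandwidth-limited link.
import io
import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
# Stand-in for a captured high-resolution descent frame (random texture).
frame = Image.fromarray(rng.integers(0, 256, size=(1080, 1920, 3), dtype=np.uint8))

def payload_bytes(img, scale, quality=70):
    """JPEG size of the frame after shrinking both dimensions by `scale`."""
    w, h = img.size
    small = img.resize((max(1, w // scale), max(1, h // scale)))
    buf = io.BytesIO()
    small.save(buf, format="JPEG", quality=quality)
    return buf.getbuffer().nbytes

for scale in (1, 4, 8):
    size = payload_bytes(frame, scale)
    # Rough transmit time over an assumed ~250 kbit/s sub-GHz link.
    print(f"1/{scale} resolution: {size} bytes, ~{size * 8 / 250_000:.2f} s at 250 kbit/s")
```
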
  • Item
    A Reliable and Low Latency Synchronizing Middleware for Co-simulation of a Heterogeneous Multi-Robot Systems
    (2022-11-10) Dey, Emon; Walczak, Mikolaj; Anwar, Mohammad Saeid; Roy, Nirmalya
    Search and rescue, wildfire monitoring, and flood/hurricane impact assessment are mission-critical services for recent IoT networks. Communication synchronization, dependability, and minimal communication jitter are major simulation and system issues when coupling a time-stepped, physics-based ROS simulator with an event-based wireless network simulator and with the complex dynamics of mobile, heterogeneous IoT devices deployed in actual environments. Simulating a heterogeneous multi-robot system before deployment is difficult because the physics (robotics) and network simulators must be kept synchronized. Most existing TCP/IP-based synchronization middlewares rely on ROS1, which has a master-based architecture. We propose a real-time ROS2 architecture with masterless packet discovery to synchronize robotics and wireless network simulations. A velocity-aware Transmission Control Protocol (TCP) technique for ground and aerial robots, built on Data Distribution Service (DDS) publish-subscribe transport, minimizes packet loss as well as synchronization, transmission, and communication jitter. We simulate and test with Gazebo and NS-3, and the middleware is simulator-agnostic. We evaluated our ROS2-based synchronization middleware for packet loss probability and average latency under LOS/NLOS conditions and both TCP and UDP protocols. A thorough ablation study replaced NS-3 with EMANE, a real-time wireless network simulator, and masterless ROS2 with master-based ROS1. Finally, we tested network synchronization and jitter using one aerial drone (Duckiedrone) and two ground vehicles (TurtleBot3 Burger) on different terrains in masterless (ROS2) and master-enabled (ROS1) clusters. Our middleware shows that a large-scale IoT infrastructure with a diverse set of stationary and robotic devices can achieve low-latency communication (12% and 11% reduction in simulation and in real deployments) while meeting the reliability (10% and 15% packet loss reduction) and high-fidelity requirements of mission-critical applications.
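
The middleware above builds on ROS2's masterless DDS discovery and publish-subscribe transport. As a point of reference, a minimal rclpy publisher node is sketched below; it requires a ROS 2 installation, and the node name, topic, and 10 Hz rate are illustrative rather than taken from the paper (it does not implement the velocity-aware TCP scheme).

```python
# Minimal ROS 2 (rclpy) publisher: nodes discover each other over DDS without a
# central master, which is the masterless pattern the middleware relies on.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class SyncHeartbeatPublisher(Node):
    def __init__(self):
        super().__init__("sync_publisher")
        self.pub = self.create_publisher(String, "sync/heartbeat", 10)
        self.timer = self.create_timer(0.1, self.tick)  # 10 Hz heartbeat
        self.seq = 0

    def tick(self):
        msg = String()
        msg.data = f"seq={self.seq}"
        self.pub.publish(msg)
        self.seq += 1

def main():
    rclpy.init()
    node = SyncHeartbeatPublisher()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == "__main__":
    main()
```
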
  • Item
    ARIS: A Real Time Edge Computed Accident Risk Inference System
    (IEEE, 2021-10-08) Ovi, Pretom Roy; Dey, Emon; Roy, Nirmalya; Gangopadhyay, Aryya
    To deploy an intelligent transport system in an urban environment, an effective and real-time accident risk prediction method is required that can help maintain road safety and provide an adequate level of medical assistance and transport in case of an emergency. Reducing traffic accidents is an important problem for increasing public safety, so accident analysis and prediction have been a subject of extensive research in recent times. Even if a traffic hazard occurs, a readily deployable system with accurate accident prediction can contribute to better management of rescue resources. However, the significant shortcomings of current studies are the use of small-scale datasets with minimal scope, reliance on extensive data, and inapplicability to real-time use. To overcome these challenges, we propose ARIS: a system for real-time traffic accident prediction built on a traffic accident dataset named ‘US-Accidents’, which covers 49 states of the United States and was collected from February 2016 to June 2020. Our approach is based on a deep neural network model that utilizes a variety of data characteristics, such as time-sensitive weather data, textual information, and discerning factors. We have tested ARIS against multiple baselines through a comprehensive series of experiments across several major cities of the USA and observed significant improvements during inference, especially in detecting accident classes. Additionally, to make our model edge-implementable, we compressed it using a joint technique of magnitude-based weight pruning and model quantization. We also demonstrate the inference results, along with power consumption profiling, after deploying the model in a resource-constrained environment consisting of an Intel Neural Compute Stick 2 (NCS2) with a Raspberry Pi 4B (RPi4). Our investigation and observations indicate major improvements in predicting unusual traffic accident events even after model compression and deployment. We managed to reduce the model size and inference time by ≈6x and ≈70%, respectively, with an insignificant drop in performance. Furthermore, to better understand the importance of each individual type of variable used in our analysis, we present a comprehensive ablation study.
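
The compression step described in the ARIS abstract (magnitude-based weight pruning followed by quantization) can be sketched with standard PyTorch utilities. The tiny network and the 50% sparsity target below are assumptions for illustration only; this is not the authors' model, dataset, or deployment toolchain.

```python
# Hedged sketch: magnitude-based (L1) weight pruning, then post-training dynamic
# quantization of the linear layers to int8.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 2),            # accident / no-accident (illustrative head)
)

# Prune 50% of the smallest-magnitude weights in each linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")   # bake the pruning mask into the tensor

# Post-training dynamic quantization of the linear layers.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 64)   # stand-in feature vector
print(quantized(x))
```
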
  • Item
    Multidisciplinary Education on Big Data + HPC + Atmospheric Sciences
    (National Science Foundation, 2017-11-01) Wang, Jianwu; Gobbert, Matthias K.; Zhang, Zhibo; Gangopadhyay, Aryya; Page, Glenn G.
    We present a new initiative to create a training program or graduate-level course (cybertraining.umbc.edu) in big data applied to atmospheric sciences as the application area, using high-performance computing as an indispensable tool. The training consists of instruction in all three areas of "Big Data + HPC + Atmospheric Sciences," supported by teaching assistants and followed by faculty-guided project research in a multidisciplinary team with participants from each area. Participating graduate students, post-docs, and junior faculty from around the nation will be exposed to multidisciplinary research and have the opportunity for significant career impact. The paper discusses the challenges, proposed solutions, and practical issues of the initiative, and how to integrate high-quality developmental program evaluation into the improvement of the initiative from the start to aid in the ongoing development of the program.