Internet of Things
In cities across the globe, road transport remains an important source of air pollutants linked with acute and chronic health effects. Over the last 20 years, my research group has investigated these associations in human challenge chamber studies in Umeå, Sweden; in real-world exposure scenarios in London; and, most recently, in the megacity of Beijing, China. The main particulate matter (PM) components originating from road traffic are engine emissions, and the largest single source is diesel exhaust (DE). Indeed, owing to the increased market penetration of diesel engines in many European countries, and the fact that they generate up to 100 times as many particles as comparable gasoline engines with three-way catalytic converters, diesel exhaust particles (DEPs) contribute significantly to the airshed in many of the world’s largest cities.
To investigate effects on the airways, human volunteers (healthy and/or mildly asthmatic) were exposed for 1–2 h to whole DE (particulates and the associated gas phase) from an idling engine, at concentrations ranging from environmentally relevant (PM10 100 µg/m³, 0.7 ppm NO2) to those commonly experienced in busy diesel-dominated traffic environments (PM10 300 µg/m³, 1.6 ppm NO2). By performing blood, bronchoalveolar lavage (BAL), and bronchial mucosal biopsy sampling after exposure, these studies have been instrumental in uncovering a systemic and pulmonary inflammatory response attributed, in part, to the oxidative properties of exhaust PM.
Away from orthodox, controlled exposure chamber studies, work was next undertaken in London. Using the city as the laboratory, real-world exposure scenarios were used to investigate the respiratory effects of short-term exposure to diesel traffic. In adults with mild to moderate asthma, walking for 2 h along a busy city street where traffic is entirely diesel powered (as opposed to in a nearby park) resulted in a significant but asymptomatic reduction in lung function. In line with our studies of humans in exposure chambers and our current understanding of the chain of molecular events, roadside traffic exposures also induced inflammatory changes, namely an increase in sputum neutrophil counts and IL-8 and myeloperoxidase concentrations.
As described above, evidence from my research in Umeå and London supports an interactive chain of events linking pollution-induced pulmonary and systemic oxidative stress, inflammatory events, and translocation of particle constituents with an associated risk of vascular dysfunction, atherosclerosis, and ischemic cardiovascular and obstructive pulmonary diseases. It is now clearly recognised that exposure to combustion-related PM, at concentrations experienced by populations throughout the world, contributes to pulmonary and cardiac disease through multiple mechanistic pathways that are complex and interdependent.
In contrast to the UK’s long industrial heritage, China has undergone rapid industrialization over the past few decades, adding thousands of kilometres of urban road and hundreds of millions of vehicles. PM emissions from traffic have contributed to increasingly poor air quality in Beijing, threatening public health. In 2016 we started work on a new project in Beijing – Effects of air pollution on cardiopulmonary disease in urban and peri-urban residents in Beijing (AIRLESS). We are examining the impact of air pollution on health by comparing residents who live in the centre of Beijing with residents who live beyond the Beijing urban sprawl in a rural area. The 120 individuals in the rural area are also exposed to pollution, but it tends to be coal/biomass pollution experienced as part of everyday life, whereas in central Beijing the pollution is more traffic-based. We provide the volunteers with personal air quality monitors and measure their exposure 24 hours a day for seven days. These data give us a unique view of their personal exposure to air pollution. With this approach, we will learn whether the pollution in China produces the same type of biological responses we have seen previously in Umeå and London.
The study of complex systems, and the control problems arising from them, has remained one of the prime research topics for many years. Complex systems can be roughly described as systems composed of many components that interact with each other. If these components interact in a nonlinear fashion, such systems are referred to as complex nonlinear systems. Popular examples include biological systems, network (power or communication) systems, underactuated systems in aerospace and robotics, economic systems, bio-mimicking engineering systems, multi-agent systems, and so on.
Towards the aim of exploring such complex nonlinear systems, we predominantly look at a class of underactuated systems and address the complete control problem, from system representation through control design and stability analysis to experimental results. We also briefly look at a special underactuated system, the slosh-container, and summarise the work done on this problem.
Finally, some future prospects of the current work, along with a few problem areas that constitute the intended thrust of my further research, will be discussed.
Deploying autonomy in expensive, safety-critical, or high-risk systems requires guarantees of safety. Two major challenges in providing these assurances are the stochasticity and the high dimensionality of the system. Stochasticity may capture human actions, disturbance effects, and the inevitable limitations of mathematical models; high dimensionality is unavoidable as model fidelity improves. The desired guarantee may be obtained by solving the stochastic reach-avoid problem, a stochastic optimal control problem with a multiplicative cost function. Existing approaches provide approximations and suffer from the curse of dimensionality. We propose scalable algorithms to underapproximate the stochastic reach-avoid probability and the associated sets using convex optimization, Fourier analysis, and computational geometry. Our approach is grid-free and recursion-free and enables verification of high-dimensional stochastic dynamical systems. We apply our method to problems in stochastic target capture using quadrotors and satellite rendezvous and docking.
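For intuition, the quantity being underapproximated can be estimated by brute-force Monte Carlo simulation on a toy system. The sketch below uses a hypothetical 1-D linear system with a simple proportional policy; all sets, parameters, and function names are illustrative, and this sampling approach is exactly what the grid-free method above avoids.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D linear system x_{k+1} = x_k + u_k + w_k with Gaussian noise.
# Reach-avoid: stay inside the safe set [-1, 1] at every step and end
# inside the target set [0.4, 0.6] at the final step.
def reach_avoid_mc(x0, horizon=5, trials=10000, sigma=0.1):
    """Monte Carlo estimate of the reach-avoid probability from state x0."""
    successes = 0
    target = 0.5
    for _ in range(trials):
        x, ok = x0, True
        for _k in range(horizon):
            u = 0.3 * (target - x)              # illustrative proportional policy
            x = x + u + sigma * rng.standard_normal()
            if abs(x) > 1.0:                    # left the safe set: trajectory fails
                ok = False
                break
        if ok and 0.4 <= x <= 0.6:              # reached the target set at the horizon
            successes += 1
    return successes / trials

print(reach_avoid_mc(0.0))
```

Each trial samples one closed-loop trajectory; the estimate converges slowly and scales poorly with dimension, which is the motivation for the convex-optimization-based underapproximations discussed in the talk.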
One of the grand challenges of this century is to understand the information processing architecture of the brain in order to develop intelligent computing platforms. Various neurobiological studies have shown that information processing in biology relies on impulse-like signals emitted by neurons, called action potentials. Motivated by this form of information representation, Spiking Neural Networks (SNNs) have been proposed, in which the timing of spikes generated by artificial neurons is central to their learning and inference capabilities. My research aims to investigate the computational power of biologically plausible SNNs and to quantify their hardware efficiency on existing and emerging platforms, compared to the state-of-the-art artificial neural networks used in machine learning today.
As an exemplary illustration of spike-based learning and inference, I will describe a novel spiking neural network (SNN) for automated, real-time handwritten digit classification and its implementation on a GP-GPU platform. Information processing within the network, from feature extraction to classification, is implemented by mimicking the basic aspects of neuronal spike initiation and propagation in the brain. The feature extraction layer of the SNN uses fixed synaptic weight maps to extract the key features of the image, and the classifier layer uses the recently developed NormAD approximate gradient descent based supervised learning algorithm for spiking neural networks to adjust the synaptic weights. On the standard MNIST database of handwritten digit images, the SNN achieves an accuracy of 99.80% on the training set and 98.06% on the test set, with nearly 4× fewer parameters than state-of-the-art spiking networks. The SNN is implemented on a GPU-based user-interface system to infer digits written by different users within an SNN emulation time of less than 100 ms.
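The basic spiking unit underlying such networks can be sketched in a few lines. The following is an illustrative leaky integrate-and-fire (LIF) neuron simulation, not the NormAD-trained network itself; all parameter values are made up.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# integrates the input current with leak, and a spike is emitted (and the
# potential reset) whenever the threshold is crossed.
def lif_spikes(input_current, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0):
    """Simulate a LIF neuron; return the binary spike train as an array."""
    v, spikes = 0.0, []
    for i_t in input_current:
        v += (dt / tau) * (-v + i_t)   # leaky integration of the input current
        if v >= v_thresh:              # threshold crossing: emit a spike
            spikes.append(1)
            v = v_reset                # reset the membrane potential
        else:
            spikes.append(0)
    return np.array(spikes)

train = lif_spikes(np.full(200, 30.0))   # constant suprathreshold drive, 200 ms
print(train.sum())                        # total number of spikes emitted
```

In an SNN, layers of such neurons are connected by weighted synapses, and learning rules like NormAD adjust those weights based on the timing of the resulting spikes.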
This talk mainly addresses the problem of secure control of networked cyber-physical systems, with, title notwithstanding, a digression into mm-wave networks, which have quickly become of great topical interest since the FCC release of 10.85 GHz of spectrum in July 2016. We consider physical plants controlled by multiple actuators and sensors communicating over a network, where some sensors and actuators may be “malicious.” A malicious sensor may not truthfully report the measurement it observes, while a malicious actuator may not apply actuation signals in accordance with the designed control policy.
In the first segment of the talk, we introduce the notions of securable and unsecurable subspaces of a linear dynamical system, which have important operational meanings in the context of secure control. These subspaces may be regarded as analogs of the controllable and unobservable subspaces reexamined in an era where there is intense interest in cyber security of control systems.
In the second segment of the talk, we address the problem of detecting malicious sensors in a system. We propose a general technique, called “Dynamic Watermarking,” by which honest actuators in the system can detect the actions of malicious sensors, and disable closed-loop control based on their information.
We then digress to Medium Access Control (MAC) design for mm-wave wireless networks. The high directionality of mm-wave nodes introduces the problem of deafness, which renders conventional MAC protocols such as CSMA/CA ineffective in orchestrating the medium access. We outline some preliminary results on TrackMAC, a MAC protocol designed for mm-wave wireless networks, and show how it achieves efficient medium access.
This talk is based on several joint works with Prof. P. R. Kumar, Woo-Hyun Ko, and Simon Yau of Texas A&M University, and Dr. Amal Ekbal, Dr. Ahsan Aziz, and Dr. Nikhil Kundargi of National Instruments.
Dynamical systems are all around us; they are complex in terms of constraints, uncertainty, nonlinearity, or being distributed. Control systems enforce a desired behavior on the evolution of a dynamical system, with applications to diverse branches of science and engineering, from robotics to neuroscience, and have had enormous impact on society.
In particular, model predictive control technology has revolutionized industrial control, as it introduced a systematic means of handling constraints by making predictions over models and solving the resulting (non)convex optimization problem. Because its computations are performed online, it is naturally suited to dealing with stochastic disturbances, time variance, and interacting systems. These techniques will be demonstrated on automatic transmissions.
Consequently, as more and more critical infrastructure, such as aerospace systems, is embedded with sensing and control and linked to the internet, the resulting security vulnerabilities can be exploited to inflict systematic damage on the connected physical systems. I shall demonstrate such attacks on a model B747 aircraft and conclude by highlighting the potential of control beyond the boundaries of the discipline, in areas such as neuroscience.
Unmanned aerial vehicles are seeing explosive growth in several domains, mostly in the commercial sector. Applications include real estate, journalism, wildlife conservation, precision agriculture, delivery, internet access, infrastructure assessment, etc. Each application has key requirements that must be met in the design of these vehicles.
In this talk, some of the challenges in the air vehicle design for these applications will be presented, including aerodynamics, structures and flight control. A key design challenge is ensuring design flexibility, extensibility, and reconfigurability – while guaranteeing low cost, resource efficiency, reliability, and robustness.
The conventional mindset is to first design the airframe and then address the sensing and control architecture, which is often restrictive and inhibits true system-level optimization. In this talk, a new framework for designing UAVs will be presented that integrates aerodynamics, structural properties, sensing, and control in a unified manner.
Finite blocklength converses in information theory have been discovered for several loss criteria using a variety of arguments. What is perhaps unsatisfactory is the absence of a common framework with which converses can be found for any loss criterion. We present a linear programming based framework for obtaining converses for finite blocklength lossy joint source-channel coding problems. The framework applies to any loss criterion, generalizes certain previously known converses, and also extends to multi-terminal settings. The finite blocklength problem is posed equivalently as a nonconvex optimization problem, and using a lift-and-project-like method, a close but tractable LP relaxation of this problem is derived. Lower bounds on the original problem are obtained by the construction of feasible points for the dual of this LP relaxation. A particular application of this approach leads to new converses that improve on the converses of Kostina and Verdú for joint source-channel coding and lossy source coding, and imply the converse of Polyanskiy, Poor and Verdú for channel coding. Another construction leads to a new general converse for finite blocklength joint source-channel coding which shows that the LP is tight for all blocklengths in the “matched setting” of minimization of the expected average bit-wise Hamming distortion of a q-ary uniform source over a q-ary symmetric memoryless channel.
The tightness of the LP relaxation for canonical problems in information theory shows that optimal coding in these problems has an associated “dual” viewpoint: namely, the optimal packing of “source flows” and “channel flows” that are throttled by an error density bottleneck. In the multi-terminal setting, using the language of these flows we derive improvements to the converses of Miyake and Kanaya for Slepian-Wolf coding and the converse of Zhou et al. for the successive refinement problem, as well as new tight converses for compound and averaged channels. Coincidentally, the recent past has seen a spurt of results on using duality to obtain outer bounds in combinatorial coding theory (including the author’s own nonasymptotic upper bounds for zero-error codes for the deletion channel). We speculate that these and our results hold the promise of a unified, duality-based theory of converses for problems in information theory.
Sustainability and security considerations have led to an increased deployment of renewable generation in grids all over the world. However, the limited control capabilities and uncertainty associated with renewables pose a challenge to the conventional “load-following” operational strategy adopted by grid operators. Engaging the demand side in power grid operations is a potential solution to this challenge. This talk provides an overview of ways in which consumers can participate in grid management and the benefits associated with their participation. The talk also describes technology solutions being developed at IITB to equip consumers with the information and decision support tools needed to facilitate their participation in grid operations.
Cyber-Physical Systems (CPS) are engineered systems resulting from a seamless integration between physical processes and cyber technologies such as communication networks and computational hardware. This tight integration exposes the CPS to a variety of attacks, both on the physical and cyber components, which can result in significant performance degradation. Further, CPS usually consist of multiple agents that collaborate and share information with each other, thus making them vulnerable to privacy breach and leakage of confidential data. This talk will focus on the need, design and analysis of security and privacy mechanisms in CPS.
In the first part of the talk, we will present a security problem for real-time resource-constrained autonomous systems (for example, a UAV), which can reserve only limited computational resources and time for security and control purposes. In such scenarios, the control and security tasks usually compete with each other for limited resources, and there exists a trade-off between security and control performance. We characterize the optimal trade-off and identify attack regimes in which the system should prefer control tasks over security tasks, and vice versa.
The second part will focus on privacy in cooperative dynamical multi-agent CPS. We present a noise-adding differentially private mechanism to preserve the privacy of agents’ states over time, and analyze the effect of the privacy mechanism on system performance. Next, we show that a fundamental trade-off exists between privacy and the level of cooperation, and that it is beneficial for the agents to reduce cooperation if they want to be more private.
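As a minimal sketch of a noise-adding privacy mechanism, the generic Laplace mechanism from the differential privacy literature is shown below; the sensitivity and epsilon values are illustrative, and the mechanism analyzed in the talk for dynamical systems is more involved.

```python
import numpy as np

rng = np.random.default_rng(1)

# Generic Laplace noise-adding mechanism (illustrative parameter values):
# a scalar state is released with additive Laplace noise whose scale is
# sensitivity / epsilon, so smaller epsilon means more noise and more privacy.
def privatize(state, sensitivity=1.0, epsilon=0.5):
    """Return a differentially private release of a scalar state."""
    scale = sensitivity / epsilon
    return state + rng.laplace(loc=0.0, scale=scale)

true_state = 10.0
noisy_release = privatize(true_state)
print(noisy_release)
```

The trade-off mentioned above shows up directly here: the noise that protects an agent's state is the same noise that degrades what cooperating agents can learn from the released value.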
The modern world is witnessing rapid change driven by shifting demographic, scientific, and climatic conditions, and this is reflected equally in the energy utility landscape. Political and economic momentum in favor of carbon-neutral measures, together with wide-ranging legislation by government bodies to accommodate distributed renewable energy sources, is gradually eroding the natural monopoly the utilities once enjoyed, opening up new possibilities for energy management. In a bid to survive in this era of hyper-competition, utilities and transmission system operators (TSOs) are focusing on curtailing operational expenditure (OPEX) and optimizing capital expenditure (CAPEX) while still trying to run the business profitably. These efforts must also be sustainable, so that they survive major technological or economic reforms.
India occupies a special place owing to its demographic dividend, its quest for power and connectivity, and the massive technological need to serve its population at large. To overcome these varied challenges, an indigenous technology push plays a vital role, and several government organizations and initiatives have become instrumental in making socio-economic impact.
At the same time, the emergence of the fourth industrial revolution is shifting focus towards digitalization, cloud computing, big data analytics, the internet of things, and related topics. The lecture will discuss some of the challenges related to the convergence of the power and automation sectors. The challenges in this field are enormous, and so are the opportunities. The future prospects of connected things in a power system, and the role of automation in improving the quality of life in the context of smart cities, will also be discussed.
For people with ALS, their eyes are their only link to the world. Eye-controlled applications allow them a measure of communication and control and promise a degree of autonomy and dignity. After many decades in labs and niche applications, eye tracking is poised to go mainstream in the next five years and transform lives and society at large. It has important applications beyond accessibility. Our eyes are our primary portal to the world. Now imagine if a computer could detect where and what you were looking at all the time. The business applications beyond accessibility are vast (gaming, education, productivity), and the social implications are somewhat scary.
Artificial Intelligence and deep learning have taken the IT industry by storm. Major IT firms are investing heavily in these areas to gain a strategic advantage. This intense competition has resulted in many fast-paced technology developments, which are being incorporated into a plethora of applications in a variety of domains. This talk will provide an overview of the hype and reality behind the recent advances in AI/deep learning, and outline their potential use in cyber-physical systems.
Deep learning techniques are capable of making sense of large volumes of disparate sensor data, and (deep) reinforcement learning can deal with complex control and planning scenarios. In combination, therefore, these techniques can be used to address many challenges related to cyber-physical systems, such as monitoring and controlling large-scale deployments, intelligent transportation, and drones.
Cyber-physical systems, which rely on the joint functioning of information and physical systems, are vulnerable to cyber “information attacks” that impair the functioning of the underlying physical system. This talk focuses on two broad classes of such attacks: Information Extraction – wherein an external observer with access to input and output variables of the system, exposed through cyber communication links, infers sensitive information about the internal states of the system and consequently compromises the security of system operation – and False Information Injection – wherein an attacker injects false inputs into the physical system and consequently impairs its functions.
In this talk, an abstract framework that integrates information theoretic measures into classical stochastic control models is proposed to study fundamental security tradeoffs in such systems. The proposed framework is then used to study some practical problems in cyber-physical security, most notably within the context of energy storage in the smart electricity grid.
As we move away from fossil fuels toward renewable energy sources such as solar and wind, inexpensive energy storage technologies are required, because these sources are intermittent. An alternative to batteries – which are quite expensive – is “smart loads”, such as air conditioners equipped with computation and communication capability. With appropriate software, the power consumption of air conditioning – and many other loads – can be varied around a baseline. This variation is analogous to the charging and discharging of a battery. Loads equipped with such intelligence have the potential to provide a vast and inexpensive source of energy storage. Two principal challenges in creating a reliable virtual battery from millions of consumer loads are (1) maintaining consumers’ Quality of Service (QoS) within strict bounds, and (2) coordinating the actions of loads with minimal communication to ensure accurate reference tracking by the aggregate.
When the loads in question are residential loads that can either be turned on or off, the coordination problem suffers from combinatorial explosion. This talk describes our work in addressing this challenge using randomized control, in which control actions are decided probabilistically. A key advantage of this approach is that the aggregate behavior of a collection of loads can be approximated by an LTI (linear time-invariant) system. Two classes of on/off loads will be considered: deferrable loads, such as water pumps, and thermostatically controlled loads (TCLs), such as air conditioners. The latter are more challenging because of the additional randomness introduced by weather and consumer behavior; while the former can be modeled by a finite state-space Markov chain, the latter require an infinite state space.
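For intuition, a toy randomized-control simulation of a population of on/off loads (with made-up switching probabilities, not a tuned controller) illustrates how probabilistic decisions yield a predictable aggregate:

```python
import numpy as np

rng = np.random.default_rng(2)

# Each load is a two-state Markov chain: an "off" load turns on with
# probability p_on, an "on" load turns off with probability p_off.
# A coordinator could broadcast these probabilities to steer the aggregate.
def step(states, p_on, p_off):
    """Advance every load one time step according to the switching probabilities."""
    u = rng.random(states.size)
    turn_on = (states == 0) & (u < p_on)
    turn_off = (states == 1) & (u < p_off)
    states = states.copy()
    states[turn_on] = 1
    states[turn_off] = 0
    return states

n = 10000                               # number of loads in the population
states = np.zeros(n, dtype=int)         # all loads initially off
for _ in range(50):
    states = step(states, p_on=0.2, p_off=0.1)

# The stationary on-fraction of this two-state chain is p_on / (p_on + p_off) = 2/3,
# so the aggregate power settles near a predictable level despite individual randomness.
print(round(states.mean(), 2))
```

No load needs to know what the others are doing: each flips a coin locally, yet the law of large numbers makes the aggregate on-fraction track the value implied by the broadcast probabilities, which is what makes the LTI approximation of the aggregate work.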
Society is rapidly advancing towards autonomous cyber-physical systems (CPS) that interact and collaborate with humans. Examples include semi-autonomous vehicles interacting with drivers and pedestrians, medical robots interacting with doctors and nurses, and many more. The safety-critical nature of these systems requires us to provide strong correctness guarantees on their performance in interaction with humans. However, the combination of intelligence and autonomy in these systems, and their interactions with humans, make them particularly challenging for verification and control.
In this talk, I will discuss our recent work on this topic of safe and interactive autonomy and verified intelligent systems. First, I will describe a learning-based game-theoretic approach to design autonomous systems that are mindful of their effects on humans, and further leverage these effects for better efficiency, coordination, and estimation. Next, I will discuss techniques to systematically verify robustness and safety of such systems. Finally, I will discuss the broader challenges for verified artificial intelligence, and corresponding promising directions to tackle these challenges.
The design of complex engineered systems requires the concerted effort of diverse stakeholders, each responsible for a part or an aspect of the overall problem. Inconsistencies emerge naturally when different parts of a system design are concurrently modified without regard for their dependencies. Such dependencies range from easy to detect and fix to requiring costly simulations and rework. This presentation introduces modelling language engineering concepts, in particular linguistic and ontological properties, as a means to reason about the relationships between different views and engineering disciplines. Subsequently, we explore the link between consistency management and design processes.
Vector Space Models represent words in a high-dimensional space where “semantically” similar words are mapped to neighboring points. Such techniques have a long, rich history in the field of Natural Language Processing (NLP), but all methods depend in some way or another on the Distributional Hypothesis, which states that words that appear in the same contexts share semantic meaning. Here we share our experience with one such predictive model, Word2vec Skip-Gram, a particularly computationally efficient scheme for learning word embeddings from raw text.
The shallow neural network implementation of the model, applied to a large life science corpus, results in a Semantic Bio-Knowledge Graph of nodes (corresponding to words/phrases) with edge weights determined by metrics such as the cosine distance between vector pairs. We explore various properties exhibited by the Semantic Bio-Knowledge Graph and highlight early results that suggest novel associations emerging from the data (such as between genes/diseases, drugs/genes, genes/genes, etc.). In particular, temporal analysis of the graph yields robust predictions of certain associations well ahead of their actual appearance in the primary literature. We believe that a larger distributed software system, nferX, built on this model can augment knowledge synthesis and hypothesis generation in an era of exponentially growing literature.
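As a minimal sketch of the edge-weight computation, the cosine similarity between two embedding vectors can be computed as below; the 4-dimensional vectors and the gene/disease labels are purely illustrative, not taken from the actual model.

```python
import math

# Cosine similarity between two word vectors: the edge-weight metric
# mentioned above (cosine distance is simply 1 minus this value).
def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

gene = [0.9, 0.1, 0.3, 0.0]      # hypothetical embedding of a gene name
disease = [0.8, 0.2, 0.4, 0.1]   # hypothetical embedding of a disease name
print(round(cosine_similarity(gene, disease), 3))
```

In the actual system the vectors are the learned skip-gram embeddings, typically hundreds of dimensions, and an edge is drawn between two terms when their similarity is high enough to be meaningful.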
Formal modelling is hard and is often difficult to scale for large and complex systems. As part of the AMADEOS project, a tool was designed to facilitate rapid modelling and simulation of systems-of-systems (SoS) using a customization of the Google Blockly tool. Blockly has been adopted to ease the design of SoS by means of a simpler, more intuitive user interface, thus requiring minimal technology expertise and support from the SoS designer. This talk will showcase some of the ideas and results of the tool developed for the AMADEOS project.
The rise of the Internet of Things and cyber-physical systems creates new challenges in interconnecting a variety of devices and people so that systems function in a holistic manner to address issues of efficiency, robustness, and resilience. As these systems become increasingly complex, there is a need to revisit systems engineering approaches, standards, and semantic interoperability.
This talk will also provide an example of a Smart City project in Downtown Washington, D.C. and the approach taken from both a city management and technology perspective. Further, the talk will address challenges of interoperability by making the case that category theory provides a possible semantic foundation for engineering of such complex interlinked systems.
Modern control engineering applications, such as the development of cooperative adaptive cruise control and the regulation of large-scale systems (e.g., city-wide water or power distribution systems), require synergistic interaction between sensors, communication channels, controllers, and actuators. The use of a communication network naturally gives rise to several sources of uncertainty: packet dropouts due to unreliable transmissions, variable transmission delays, etc., thereby degrading system performance with respect to the ideal model, and possibly even compromising stability, stabilizability, and controllability. Networked control systems (NCS) under such uncertainties are conveniently abstracted by mathematical models involving switching, for the study of both qualitative and quantitative behaviours. In this talk we discuss several algorithms for stabilizing switched systems. We employ graph-theoretic tools, and our algorithms have both deterministic and probabilistic flavours.
Large-scale distributed systems implementing services such as Google’s search or Amazon’s payment services are sophisticated, complex systems with hundreds of thousands of servers and hundreds to thousands of component services, with redundancy across multiple locations around the globe. One of the most critical problems in operating such systems is the need to find a small set of parameters which, if monitored correctly, will provide reliable, actionable information about the health of the system or services as a whole. This talk will focus on using response time distributions as an indicator of overall system health.
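As a concrete illustration of this idea, here is a minimal sketch of a tail-latency health check based on the 99th percentile of a response-time distribution; the latency data and the 250 ms budget are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Tail-based health check: raise an alert when the 99th percentile of
# response times exceeds a latency budget (values are illustrative).
def p99_alert(latencies_ms, budget_ms=250.0):
    p99 = np.percentile(latencies_ms, 99)
    return p99, p99 > budget_ms

# Synthetic latencies: mostly fast requests, with a 2% heavy tail of slow ones.
latencies = np.concatenate([
    rng.normal(50, 10, 9800),     # bulk of requests around 50 ms
    rng.normal(400, 50, 200),     # 2% slow outliers around 400 ms
])
p99, alert = p99_alert(latencies)
print(alert)
```

A mean or median would barely move under this 2% degradation; the p99 shifts dramatically, which is why tail percentiles of the response-time distribution are such a sensitive health signal.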
This talk will bring forth the burden of hidden hunger both globally and within the country, evaluate its causality with regard to iron deficiency, and provide scientific methodologies to assess and develop feasible approaches to address hidden hunger at the policy and population levels.
Personal health monitoring systems are emerging as promising solutions to tackle healthcare costs and delivery. There is growing interest within the healthcare community in developing ultra-low-power, portable devices that can continuously monitor and process several vital body parameters. In this talk, I will present our work in developing ultra-low-power, affordable hardware and software platforms and multi-core architectures for continuous health monitoring. I will present our work on on-chip machine learning and classifier design for detecting cardiac abnormalities and the user’s physical and mental wellness states. I will also present applications and case studies in designing IoT and wearable devices for physical and emotional health monitoring that obtain the user’s key physiological signals: ECG, respiration, impedance cardiogram (ICG), blood pressure, and skin conductance, and derive the user’s emotional state. Finally, I will present future directions in adaptive on-device machine learning methods and in immersive virtual/augmented reality and games that integrate the user’s emotions.
There are several far-reaching changes underway in power systems. Renewable energy sources such as solar and wind are time-varying. To enhance their usage, demand will need to be adjusted to meet supply, rather than the other way around as is traditional. This raises several issues lying at the confluence of economic behavior and elasticity, demand pooling, implicit or explicit storage, information availability, privacy, adaptation, and control. At the same time, the increasing potential deployment of phasor measurement units (PMUs) raises the issue of how to make the huge amount of data intelligible and visualizable to operators, and how to draw inferences from it. On another front, the increasing potential usage of electric vehicles raises issues surrounding how to charge them. I will describe some efforts at understanding these problems.
Increasing reliability and power quality in electricity grids has been a focus since electricity distribution systems were first created. However, for a variety of technological, economic, and public policy reasons, it may be desirable to work towards utility system standards that are lower than they are today. A network model of power at the local level can make local generation and storage – and hence local reliability – easier and less expensive to create. This paper summarizes how actual needs for power quality and reliability can be met with new technology options, thereby shifting substantial future capital investment from the utility grid to local investment within buildings.
The Internet of Things (IoT) is a vision of a ubiquitous society wherein people and "Things" are connected in an immersively networked computing environment. The past couple of years have seen heightened interest in the IoT space transcending industry, academia, and government. Estimates of the spread and economic impact of IoT over the next few years are in the neighborhood of 50 billion or more connected "Things", with a market exceeding $350 billion. The enterprise, societal, and individual benefits touted are significant, with smarter cities and infrastructure, intelligent appliances, and healthier lifestyles in the offing.
This talk will discuss the assumptions and challenges on the path to integrating physical “things” into an interconnected system.
Sensor networks have evolved through many distinct phases over the last two decades. The field achieved a set of milestones along the way; however, some of the initially hyped topics strayed and never delivered what was envisioned. This talk will summarise the outcomes of the key focus areas of sensor networks over the last decade. Subsequently, it will throw some light on “Sensor Networks in Smart Grid and Energy Management”. Energy consumption can be reduced by using low-consumption devices and by identifying and eliminating energy wastage. Drawing on past deployments, the speaker will explain how sensor network technologies have identified and eliminated electricity wastage, achieving results that were not possible otherwise.
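One common form of the wastage detection described above can be sketched simply: cross-reference a power meter with an occupancy sensor and flag intervals where a load keeps running in an empty space. The data, threshold, and function name are invented for illustration; they are not from the speaker's deployments.

```python
def find_wastage(power_kw, occupied, idle_threshold_kw=0.2):
    """Return hour indices where power draw exceeds the idle threshold
    even though the occupancy sensor reports the space empty."""
    return [h for h, (p, occ) in enumerate(zip(power_kw, occupied))
            if p > idle_threshold_kw and not occ]

power    = [0.1, 1.5, 1.6, 0.1, 1.4, 1.5]          # hourly mean draw (kW)
occupied = [False, True, True, False, False, False]  # occupancy sensor
print(find_wastage(power, occupied))   # -> [4, 5]: load left on after hours
```

The point of instrumenting with a sensor network is exactly this kind of joint view: neither stream alone reveals the waste, but correlating them makes it obvious and actionable.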
Use of AHPCO (Advanced Hydrated Photo Catalytic Oxidation) and plasma nanotechnology to develop devices for reducing aeroallergen and microbial contamination during food processing, by Nabarun Gosh, West Texas A&M University, 19 June 2015
Today's business world demands newer technologies to build cheap, efficient, marketable products. A decade of research in aerobiology and biotechnology produced an air purification system that uses Advanced Hydrated Photo Catalytic Oxidation (AHPCO) and plasma nanotechnology to reduce indoor aeroallergens, improving air quality and food preservation. Air Oasis air purifiers utilize a new-generation AHPCO technology that does not rely on filters or on air passing through the purifier. Innovations in technology continue to have a massive impact on business and society. Technology is a process and a body of knowledge as much as a collection of artifacts. Biology is no different, and we are just beginning to comprehend the challenges inherent in the next stage of biology as a human technology.
During the past nine years we have been assessing AHPCO and plasma nanotechnology for the net reduction of bacteria, fungi, and VOCs, with specific attention to methicillin-resistant Staphylococcus aureus (MRSA). A five-year research project carried out at the BSA Hospital and Coulter Animal Hospital in Amarillo, Texas showed a gradual reduction of airborne bacteria, aeroallergens, and VOCs in the indoor air, significantly improving air quality when specialized air purification units were used in the Microbiology and Mycology Laboratories of the BSA Hospital.
Evaluations of safety measures showed no side effects on human cell cultures. Indoor aeroallergens such as airborne fungal spores, airborne bacteria, and animal dander were reduced significantly when the air purification units were used, improving indoor air quality and alleviating breathing ailments.
The omnipresence of indoor lighting makes it an ideal vehicle for pervasive communication with mobile devices. First, I will present a technique that enables interior ambient LED lighting systems to communicate with mobile devices and low-power embedded devices in a manner that is imperceptible to people. This system enables the lights to act as landmarks for these devices to identify their location in indoor environments.
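The landmark idea can be sketched under simple assumptions: each LED fixture flickers at its own frequency, too fast for the eye but visible to a photodiode or camera, and counting zero crossings of the sampled intensity recovers that frequency and hence the fixture's identity. The frequencies, the frequency-to-room table, and the estimator below are all hypothetical, not the presented system's actual encoding.

```python
import math

def dominant_frequency(samples, sample_rate_hz):
    """Estimate signal frequency by counting rising zero crossings
    of a mean-removed intensity trace."""
    mean = sum(samples) / len(samples)
    centered = [s - mean for s in samples]
    crossings = sum(1 for a, b in zip(centered, centered[1:])
                    if a <= 0 < b)               # one per cycle
    duration_s = len(samples) / sample_rate_hz
    return crossings / duration_s                # cycles per second

# Hypothetical landmark table: modulation frequency (Hz) -> location.
LANDMARKS = {2000: "lobby", 3000: "corridor", 4000: "lab"}

fs = 100_000                                     # 100 kHz photodiode sampling
# Simulated 0.1 s intensity trace from a fixture flickering at 3 kHz.
trace = [math.sin(2 * math.pi * 3000 * n / fs) for n in range(fs // 10)]
f = dominant_frequency(trace, fs)
room = LANDMARKS[min(LANDMARKS, key=lambda k: abs(k - f))]
print(round(f), room)                            # -> 3000 corridor
```

Frequencies in the low kilohertz range are far above the flicker-fusion threshold of human vision, which is what makes the channel imperceptible to occupants while remaining trivial for a receiver to decode.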
The second solution I will present is an ultrasound ranging-based system. The system consists of time-synchronized ultrasonic transmitters that utilize the audio bandwidth just above the human hearing range. An unmodified mobile phone receives these signals to determine its range from the transmitters and estimates its location. This system secured first place in the infrastructure-based category of the Microsoft Indoor Localization Competition held at IPSN 2015.
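A hedged sketch of how ranges become a position fix: with three beacons at known coordinates, subtracting one range equation from the others linearizes the problem into a 2x2 system solvable by Cramer's rule. The beacon layout and ranges below are invented; the competition system's actual estimator is not specified in the abstract.

```python
import math

def trilaterate(beacons, ranges):
    """2-D position from three known (x, y) beacons and measured ranges."""
    (x0, y0), (x1, y1), (x2, y2) = beacons
    r0, r1, r2 = ranges
    # Subtract the first circle equation from the other two to cancel
    # the quadratic terms, leaving a linear system a*x + b*y = c.
    a1, b1 = 2 * (x1 - x0), 2 * (y1 - y0)
    c1 = x1**2 - x0**2 + y1**2 - y0**2 - (r1**2 - r0**2)
    a2, b2 = 2 * (x2 - x0), 2 * (y2 - y0)
    c2 = x2**2 - x0**2 + y2**2 - y0**2 - (r2**2 - r0**2)
    det = a1 * b2 - a2 * b1              # Cramer's rule
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # ceiling transmitters (m)
true_pos = (3.0, 4.0)
ranges = [math.dist(true_pos, b) for b in beacons]  # ideal, noise-free
print(trilaterate(beacons, ranges))                 # -> (3.0, 4.0)
```

With noisy real ranges and more than three beacons, the same linearization is typically fed to a least-squares solver instead of an exact 2x2 solve.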
However, setting up any ranging-based system and mapping the beacons is an arduous and labor-intensive process. I will present some early work in automating the beacon mapping process, and motivate the need to start building representations, tools, and frameworks that allow fusion of data from different current and future localization technologies.
The availability and fast development of portable sensors and ubiquitous computing capabilities are providing new opportunities in healthcare for sustainable, personalized, and readily accessible medical treatment. Parkinson’s disease (PD) is a degenerative neurological condition with severe symptoms mainly affecting mobility. New neuroscience and clinical approaches are showing evidence that it is possible to cope with some of the symptoms by using the brain plasticity still present to enhance motor learning in PD patients. On this basis, in an EU-funded research project (including eight different research centers), a portable device has been developed, including MEMS sensors (accelerometers and gyroscopes), real-time data elaboration, and real-time feedback to the patient. It can administer rehabilitative exercise in an automatic and reliable manner, with the aim of increasing patients’ quality of life.
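The closed loop the project describes (sense, elaborate, feed back) can be illustrated with a toy version: detect steps as threshold-crossing peaks in the accelerometer magnitude, compute cadence, and cue the patient when it falls below a target, as in gait cueing for PD. The trace, threshold, and target below are illustrative assumptions, not the project's actual firmware.

```python
def count_steps(accel_mag, threshold=1.3):
    """Count upward threshold crossings in an accelerometer
    magnitude trace (in g); each crossing is treated as one step."""
    return sum(1 for a, b in zip(accel_mag, accel_mag[1:])
               if a < threshold <= b)

def feedback(accel_mag, window_s, target_steps_per_min):
    """Real-time feedback rule: cue the patient if cadence drops
    below the therapist-set target."""
    cadence = count_steps(accel_mag) * 60.0 / window_s
    return "cue patient" if cadence < target_steps_per_min else "ok"

# Hypothetical 5-second magnitude trace with four step peaks.
trace = [1.0, 1.5, 1.0, 1.6, 1.0, 1.4, 1.0, 1.5, 1.0]
print(count_steps(trace), feedback(trace, 5.0, 90))   # -> 4 cue patient
```

The value of doing this on the device itself, as in the project, is that feedback arrives within the same stride rather than after an offline analysis.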
Combining simulations with formal analysis can improve the design, verification, and validation processes for embedded and cyber-physical systems. In this talk, I will present an overview of the algorithms we have developed to derive bounded-time formal guarantees for interesting models of cyber-physical systems. The algorithms combine numerical simulations with static analysis of model annotations. They are always sound, and they are complete for robust safety and temporal precedence properties. For large networked models, the annotations can be derived compositionally. I will discuss the lessons learned and the outstanding challenges in applying these algorithms to verify a parallel landing protocol and a cardiac cell-pacemaker network.
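A toy sketch of the simulation-plus-analysis idea, under stated assumptions rather than the talk's actual algorithms: simulate one trajectory of the scalar system x' = -x, then bloat it by a discrepancy bound derived from the Lipschitz constant (here L = 1) so the resulting tube covers every trajectory starting within delta of x0. If the bloated tube avoids the unsafe set up to time T, safety is soundly guaranteed up to T.

```python
import math

def verify_bounded_safety(x0, delta, T, dt, unsafe_above):
    """Sound bounded-time safety check for x' = -x with |x(0) - x0| <= delta.

    Toy example: the e^{L*t} discrepancy bound is a coarse but sound
    over-approximation of how far neighboring trajectories can drift.
    """
    L = 1.0                                  # Lipschitz constant of f(x) = -x
    x, t = x0, 0.0
    while t <= T:
        bloat = delta * math.exp(L * t)      # discrepancy bound at time t
        if x + bloat >= unsafe_above:        # tube touches the unsafe set
            return False                     # cannot certify safety
        x += dt * (-x)                       # Euler simulation step
        t += dt
    return True                              # tube stays safe up to T

print(verify_bounded_safety(x0=1.0, delta=0.1, T=2.0, dt=0.01,
                            unsafe_above=3.0))   # -> True
```

This captures the soundness half of the story: a "True" answer is a guarantee for the whole neighborhood of initial states, while a "False" answer may just mean the annotation was too coarse and the simulation needs refining.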