Wednesday 30 April 2014

Electromagnetic pulses and solar storms are equally catastrophic

Solar Flare August 31, 2012 (creative commons licensed: Wikipedia)

Electromagnetic Pulse (EMP) and Coronal Mass Ejection (CME) events have concerned public safety and security officials since the invention of the atomic bomb in World War II and the discovery, during the high-altitude tests of the 1960s, that a detonation in space produces a side effect as damaging as the atomic blast itself. The effects of a massive CME were first observed and recorded by Richard Carrington in 1859. An EMP of such magnitude would be extremely difficult to trigger without prior detection. Non-nuclear EMPs (NNEMPs) still require vast amounts of chemical explosive, and the damage would affect only a single city, not entire counties, states, or nations, even assuming the device had sophisticated electronic navigation. Very few nations have cruise missile technology that could deliver such a payload, given the chemical weight and sheer size required. Nuclear EMPs are only valuable as a wide-area weapon if detonated at high altitude, which, again, very few nations are capable of, and all of them are known to Western / NATO nations. Rogue nations are not known to have the capability to loft a device to the required 50 to 200 mile altitude, plus the down-range distance to the target area (e.g., North Korea to a region over central North America, approximately 7,500 miles), for a high-altitude nuclear explosion (HANE) / High-Altitude Electromagnetic Pulse (HEMP).

The EMP N-Club is very small; all members are well known and fully understand the Mutually Assured Destruction (MAD) doctrine, so I find that possibility extremely remote. A nuclear EMP is still a highly complex device if it is to work as intended; you can't just build one of these things by going online to Wile E. Coyote's ACME store. Dirty, half-baked devices will not trigger a wide-area electromagnetic pulse. This is particularly true of devices detonated in the atmosphere (as at Hiroshima and Nagasaki), where the pulses were absorbed within a radius of less than 10 km, and those weapons were more powerful than an improvised device is ever likely to be. Cruise missile technology is also a very complex platform. It is not just a matter of wiring a gyro-based control unit and navigation platform into a GPS smartphone and enabling it to fly on autopilot.

There is a certain level of electronic safety from an EMP (and a CME) if it is known in advance (a missile launch alert, etc.; e.g., assuming North Korea had the same performance as U.S. technology, the flight time to North America is approximately 30 to 35 minutes minimum, if not longer, plus 30 to 45 minutes for fueling), which in turn is sufficient time to order an emergency shutdown of the electrical grid prior to detonation. There are no documents (nor should there be) that identify how long such an order would take to implement, but a fair estimate from the time of a POTUS Executive Order would be approximately 3 to 7 minutes or less, plus another 5 to 10 minutes to literally pull the plug.
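The back-of-envelope timeline above can be checked with simple arithmetic. All figures here are the illustrative estimates from this post, not official numbers, and I take the worst case for the response side against the best case for the attacker:

```python
# Illustrative timeline using this post's estimates (assumptions, not
# official figures). Worst case for defenders vs. best case for attackers.
flight_time_min = 30       # minimum missile flight time to North America
exec_order_min = 7         # upper estimate: POTUS order issued and relayed
grid_shutdown_min = 10     # upper estimate: operators "pull the plug"

margin = flight_time_min - (exec_order_min + grid_shutdown_min)
print(f"Worst-case margin before detonation: {margin} minutes")  # 13 minutes
```

Even under these pessimistic assumptions, and ignoring the 30 to 45 minutes of fueling that would likely be detected first, roughly a quarter of an hour of margin remains.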

Today, we are more likely to see temporary disruption of services from our Sun, caused by solar flares and geomagnetic storms that are more powerful (and more frequent) than any potential NNEMP event. Such events are known as Coronal Mass Ejections, or CMEs.

Data and transaction records would be lost during that shutdown window, as a large-scale data center shutdown could exceed it, depending on the size and scale of the facility. Over the past 20 years, during electricity outages caused by blackouts, financial institutions have recorded very few lost or cancelled records, less than 0.5%. Most Tier 1 banks and government treasury / financial-institution data centers (but not your local ATM or bank branch) are CME / EMP hardened worldwide, including most stock exchanges. This is only partially true, because if power is supplied by standard commercial backup generators, vulnerability may still exist. Experts agree there is a clear and present danger to any section of the grid when energized high-voltage (e.g., 600 volt) transformers are exposed to a CME. I am not aware of any testing of the transformer types used in the North American grid in a non-powered (uncharged) state to determine how well they survive magnetic pulses or Coronal Mass Ejections. But NASA has been working on the problem, creating an alert system for grid operators called Solar Shield.

It is true that not every electrically powered device is fully insulated from a potential CME / EMP threat even when shut off, but the majority of equipment would be safe if turned off or unpowered at the time of an event. In my opinion, EMP events are highly unlikely, while CMEs do happen, sometimes with little advance warning. NASA published a YouTube video of a CME that narrowly missed Earth in 2012.


Monday 28 April 2014

Open Data during a disaster. Is your city ready?

Illustrated example of information flow: pyramid diagram of the City of Redland (Queensland, Australia) Disaster Management Arrangements

Information overload is a complaint often heard from crisis and disaster management agencies. Our daily lives depend on data; it is the engine that powers most aspects of our lives, and we live in a digital world that would collapse without computers, data centers and personal access to trillions of zeros and ones. It is a fact that the harvesting and management of data is a valuable tool in crisis and disaster response. The public may not understand or recognize how sensitive the collection and use of data is, or how much influence it has, during a response. It is often argued that data management, and access to it in all its forms, is crushing response resources during a catastrophic event. Emergency management agencies are under immense stress digesting, interpreting, visualizing and acting on the thousands of streams of metadata now available. There are three important categories of data housed in silos and most commonly used: static, dynamic and (fluid) real-time. Information sharing is carried out at various government levels and agencies. Australia's public safety agencies distribute information very well (see image and link above) and continue to improve in the consumption and distribution of information. It is critical that data services be used internally and published externally to the public, from the local to the national level, to potentially save lives during a disaster. The City of Redland in Queensland, Australia is an example of a city that has committed resources and planning cycles to ensure all stakeholders and infrastructure domains are involved. Redland has experience that touches three common disaster types: floods, cyclones and bushfires.

Static data is the compilation of information that does not change. These sources include fixed infrastructure that is often plotted or contained in a database that can be downloaded and layered for subscription by trusted and open domains. Dynamic data is near real-time data, which can include social media and scientific or commercial sensors, primarily focused on the results of events as they occur. Fluid real-time data includes sources from people, mobile infrastructure, support services and sensors (such as media sources, volunteers, video, imagery and public social media) that are prepared or injected in response to the event. Some experts will dispute these definitions, and their arguments have validity; it all depends on your perspective and focal point.
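The three categories above lend themselves to a simple data model. This is a hypothetical sketch: the category names follow the post, but the feed names and refresh intervals are invented for illustration.

```python
from dataclasses import dataclass
from enum import Enum

# The three silos described in the post; refresh intervals are invented.
class Category(Enum):
    STATIC = "static"      # fixed infrastructure, rarely changes
    DYNAMIC = "dynamic"    # near real-time sensors, social media
    FLUID = "fluid"        # real-time feeds injected during the event

@dataclass
class Feed:
    name: str
    category: Category
    refresh_seconds: int   # how often subscribers should poll the feed

feeds = [
    Feed("hospital_locations", Category.STATIC, 86_400),
    Feed("river_gauge_levels", Category.DYNAMIC, 300),
    Feed("public_social_media", Category.FLUID, 5),
]

# A layered map product might subscribe only to static and dynamic feeds,
# leaving fluid sources to a separate, faster-moving analysis pipeline:
base_layers = [f.name for f in feeds if f.category is not Category.FLUID]
print(base_layers)  # ['hospital_locations', 'river_gauge_levels']
```

Even a trivial model like this makes the perspective question concrete: whether a river gauge is "dynamic" or "fluid" depends on how often you refresh it and who is consuming it.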

The question many are asking is: how much data is sufficient, required, and usable? Analysis begins with what data can be ignored, discarded or buffered for potential future use. Ultimately, the data is going to be available regardless of consumption, usage and value. There is a growing trend suggesting many agencies have hit a brick wall and are beginning to push back on which sources should be subscribed to. Mirroring this conundrum is the lack of data. This is particularly true of static and dynamic information; bare essentials are still considered missing by many stakeholders. These essential data points are considered critical if the fusion of technology and data is to be an effective tool and improve preparedness and response outcomes. The recent tornadoes in Arkansas and Oklahoma are testimonials to how data is often third-hand or of little value in frontline disaster response, because immediate access to core data is wiped out. But that does not tell the whole story: data is still collected and supports recovery operations long after the disaster's impact.
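The ignore / discard / buffer decision above is, at its core, a triage function applied to every incoming record. A minimal sketch, with rules invented purely for illustration (no agency uses exactly these thresholds):

```python
# Illustrative triage of incoming records: act, buffer, or discard.
# The verification flag and priority threshold are invented for this sketch.
def triage(record: dict) -> str:
    if record.get("verified") and record.get("priority", 0) >= 8:
        return "act"       # verified and urgent: route to responders now
    if record.get("verified"):
        return "buffer"    # verified but not urgent: keep for future use
    return "discard"       # unverified: drop rather than consume resources

queue = [
    {"source": "911_call", "verified": True, "priority": 9},
    {"source": "social_media", "verified": False, "priority": 7},
    {"source": "sensor", "verified": True, "priority": 3},
]
decisions = [triage(r) for r in queue]
print(decisions)  # ['act', 'discard', 'buffer']
```

Note that "discard" here is the contentious branch: the high-priority but unverified social media report is exactly the kind of record agencies are pushing back on, and exactly the kind volunteers argue should be buffered instead.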

Where advanced technology platforms and systems are in production, data availability is not an issue. Static source feeds are being merged into large clusters for specific use in the prevention, mitigation, preparedness and recovery phases of a disaster event. We are beginning to move data sources from narrowly defined stovepipes into multiple data feeds for coordinated planning, situational awareness and response activity, simultaneously reused for public consumption in an effective and transparent environment. City and state government planning and oversight agencies are making data available for uses never originally envisioned or intended. Combined with civil engineering departments, a complete picture is potentially created, surfacing valuable information with a level of detail and accuracy never witnessed before. This capability to monitor, dissect and act has created demands for hyper-speed decision making. Crisis response planners have never had it so good: they can simulate hundreds of risk models, recording what activities are required for policy planning and decisions. This is a testament to the Open and Big Data advocates exploring the valuable opportunities available to make a difference. However, what one city or state can implement does not mean that every city, in circumstances beyond such capability, immediately needs every level of detail described.

Disaster response agencies are not always lucky, even when the most advanced systems are in place. Post-event data defines what next steps are possible, yet data inputs are often limited or offline, and time becomes the enemy in many circumstances. No two events replicate the conditions simulated, so response plans inevitably deviate from the playbook; preparing for the unexpected is mandatory. Fluid and dynamic data availability is often misleading, bringing with it a continuous cycle of resource consumption that limits agencies' ability to act. Analysts are front and center in this imposing environment, and conflicts often erupt at this level of disaster response management. Not all crisis or disaster events face this consequence. During a pandemic, the ability to pause and collect enhanced data enables a comprehensive response that often pinpoints where an identified crisis response is required. It is clear that the type of event shapes the value of the information available.

There is also the scenario where extreme conditions produce both a lack of data and a parallel overflow of sources that local resources cannot subscribe to. Third parties inject themselves, enabling alternative hypothetical outcomes and creating real-time models from multiple source data streams and archives that are not validated or necessarily accepted by authorities and response agencies.

Data can be managed using a variety of techniques when entering an Emergency Operations Center (EOC). In Redland, they have created a Disaster Hub focused on four key areas: prevention, preparedness, response and recovery (PPRR). Many countries are developing minimum standards and guidelines on how best to capture, construct and consume data for effective use during a crisis or disaster. There are experimental techniques tested in research laboratories that show significant promise, particularly when dealing with mitigation, resiliency and preparedness requirements. We are beginning to see improved sensor technology driving new modelling techniques to determine human behavior during and after a disaster occurs, informing how best to respond with resources. Where a substantial event has occurred, covering a wide region of territory, sensor data provided from multiple scientific sources is now understood and becoming effective in how best to execute a response. In past events, the velocity of these streams of data arrived at untenable transaction rates.

In the past, within an Observe, Orient, Decide and Act (OODA) loop, the approach to advanced information has been to cluster (centralize) data, then distribute it in a hub-and-spoke architecture to subscribers. This has proven an effective method for delivering response services from within a single command organization responsible for the coordination and delivery of aid. It becomes challenged when multiple organizations are 'leading' a response, and it is also an area of heated debate as it applies to information sharing, accuracy, usage, and delivery time frames. If each organization takes the position of its own mandate, the OODA loop process never breaks: the other response simply becomes one more source of data for the organization's centralized cluster, and it carries on with its mission. But this is not what is occurring in large-scale disasters across all spheres of response. There are hundreds of technical and governance issues surrounding the problem. The problem is not so much obtaining information as its interpretation, usage and the desired courses of action, which often become stalled. Inclusion is another issue that creates friction: who is, and is not, in the cluster or a subscriber to the data distribution?
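The hub-and-spoke pattern described above reduces to a central publish/subscribe broker. The sketch below is an assumed minimal design, not any agency's actual system; the topic name and message shape are invented:

```python
from collections import defaultdict
from typing import Callable

# Minimal hub-and-spoke sketch: one central cluster ingests data and
# fans it out to registered subscribers (an assumed design, for illustration).
class Hub:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        # The hub is a single point of coordination, and a point of
        # contention when several "lead" organizations each run their own.
        for handler in self._subscribers[topic]:
            handler(message)

received: list[dict] = []
hub = Hub()
hub.subscribe("flood_alerts", received.append)
hub.publish("flood_alerts", {"gauge": "A7", "level_m": 4.2})
print(received)  # [{'gauge': 'A7', 'level_m': 4.2}]
```

The inclusion problem is visible even here: anyone not in `_subscribers` simply never sees the message, which is exactly the friction the post describes.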

The result is multiple clusters of overlapping and independent metadata spanning hundreds of different platforms and devices, with little or no quality control or precision in its use. Additional issues arise between stakeholders over the use and management of the data. We continue to witness the separation of stakeholders, as if they were enemies working together under a cease-fire. This applies to communities such as NGOs, government agencies (civil and military) and the technology community (open source and proprietary). Application Program Interfaces (APIs) are offered, with few if any accepting them with open arms.

Some experts will argue that the problem lies in the OODA loop structure within an organization's culture, suggesting that legacy stovepipes still exist and will always be difficult to break down. Others point to the problem of information quality and the lack of resources to verify and act. Is open versus closed data becoming the resistance point to change? How many systems and structures of data are actually required? Do we need to simplify, or continue to parse data for specific and targeted users? This has become an arena where it is very easy to boil the ocean a few million times.

Complicating matters is the fact that we have more solutions to a problem than are needed or required. We continue to have cultural, economic and regulatory issues that are not easily overcome, and we continue to observe and re-observe the same lessons hundreds if not thousands of times. How many maps or flood monitoring applications are really required? Is there a single solution or not? Maybe. Disaster data management is not coupled to or powered by a monopoly, yet its leadership is structured as if it were one.

It starts at the top with policy and management protocol. It ends with the reconstruction of how information is processed, analyzed and distributed. With respect to the process of consuming data, the quality of a transaction rests solely on the capabilities and experience of its users. Centralizing data for coordination and usage is the right architectural design. Data is multifarious and often on a collision course with the groups that serve as moderators or facilitators. Algorithms and filtering are only one small part of the equation. Even with right-sized staffing, deciding what to deliver and to whom is where indecision and stakeholder vulnerability quickly emerge. The volume of data is important and is calculated; the question of when enough data is conclusive to act upon is not in dispute, but the data is analyzed into response-level proposals for command decision. Disaster response organizations have improved their intelligence capabilities, understanding information far better than in years past. No longer are agencies sending excessive quantities of the wrong relief supplies in error. Where these errors still occur, the majority are generated by human interaction (estimates) derived from limited capabilities to calculate needs, not from tabulated information from multiple sources or locations.

Everything from system compatibility to metadata layer specifications creates another round of moderation and filtering. This problem is being addressed through triage teams consisting of experts with more than one domain of expertise. Volunteer Technical Communities (VTCs) routinely use this method with crowdsourcing: sending out requests to solve multiple problems fed through one or more streams of information, blending them together for further analysis and decision by another cluster or group that is also crowdsourced. Government and military Disaster and Emergency Management response teams do not do this as well inside their organizations when faced with more than one type of disaster event simultaneously. A snow storm is manageable (for some at least...) because all the stakeholders know their territory, the assets available and where vulnerabilities exist. But when faced with a hurricane, flooding and a snow storm all at once, as in Hurricane Sandy, which spurred fires, multiple injuries, power outages and infrastructure failures, the system undergoes immense stress and loads that reach the breaking point, and in some eyes it did fail. Getting information was not the problem; understanding it and using it became unwieldy, and next steps were difficult to determine. What was left after the storm hit only compounded the problems experienced.

Response command and control requires enhanced information to ascertain situational awareness and conditions. It is not always about speed. Leadership has learned over the past several large-scale disasters that it is often better to review and engage experts and stakeholders multiple times before deciding how best to act, rather than activating a command decision immediately after an event occurs. This has caused multiple OODA triage clusters to emerge, processed and undertaken within the Command OODA environment. Data is not only clustered and consumed for analysis and verification, but quickly handed off to other teams for impact analysis on domains outside their own, for consultation before a top-level OODA cycle is completed. A feedback loop is generated across multiple information platforms, enhancing and improving the articulation of potential outcome scenarios before big-picture decisions are made.

Splitting apart data for extended partner interpretation has its critics, namely over response latency and how trusted the distribution and its compilation are. Such injections of multiparty analysis into the cluster have been known to be prioritized based on agendas and narrowly defined parameters that are not fully understood by the core. But there are solutions to these problems. At the tactical operating level, it is clear to many that hoarding or consuming all data from all open sources will not be effective during a disaster. At the first responder level, compartmentalized data is valid. Field operation headquarters is where the dynamic networks and distribution points converge. Implementing a shared environment that is collaborative and easy to interface with open and closed community groups is invaluable, not only for field service delivery but for redistribution to senior leadership. The information and data collected (and distributed) at this level support the OODA loop, but the teams involved often lack sufficient resources to fully exploit its potential. The Federal Emergency Management Agency (FEMA) has begun to address this problem by deploying Innovation teams into the field, developing new and open methods to collect, sort and share data important to a local community when supporting a large area of responsibility.

Senior leaders operate using a simple framework: surround yourself with people and experts who are smarter than you are and who are trusted to offer recommendations. These teams recognize and use cooperative and trust models within their direct and indirect teams, built through the consensus of facts from all sources available. The same should be true of how data is collected, analyzed and distributed. Multiple domains need to share and build common tools that are standardized and built for multiple uses instead of just one or two. The use of data should subscribe to static, dynamic and fluid metadata streams and be managed through the same process. It will require a change in mindset and in how emergency management is operated. In my book Constructive Convergence: Imagery and Humanitarian Assistance, I quoted U.S. Space Command Major General John Hawley (ret.): "Imagery is like fish - best when it's fresh". The same can be said of information data: deliver it untainted and fresh to multiple analysis groups for interpretation, then regroup to form individual and combined evaluations.

No matter how small or big your community is, interoperable and usable data is an important part of crisis and disaster management. By clearly identifying local risks and how to prepare for them, a community can improve its response capabilities and services. Just ask the city council of Redland: they have made sure everyone has a role to play in sharing and leveraging information.