Constructing big data

There are four key elements to constructing big data. They build on my previous post, the 3 vectors of big data.

  • Accumulation
  • Abstraction
  • Autonomy
  • Simulation

Accumulation: We are currently in an accumulation phase of big data. Systems designed for a purpose are accumulating facts into predesigned frameworks; some of them are open, some are closed. In the end, the disparate data sets accumulated throughout organizations must be valued as assets, with value placed upon them in hard dollars and cents. C-level executives and business leaders must be shown that a new asset class needs to be established. All data is now valuable: how it's used or reused is a corporate strategic advantage in the marketplace, and how it's shared is essential for humanity.

Abstraction: Abstraction is the process of separating ideas from specific instances of those ideas at work. Think of the current trends in the Internet of Things: soon, everything will be enabled. That is step one. Abstractions representing units, banks of units, or entire ecosystems of units will feed big data a sea of information. This will be heterogeneous to a fault, so abstraction and interaction models will need to be established. These abstractions need to start within the endpoints: small, node-based data sources that carry a first-level abstraction of their physical topology. This abstraction layer would allow associated objects, such as big data instances, avatars, and peer-to-peer meshes, to functionally connect with base and extended abstractions. In a 3D world such as Second Life, we have simulators creating objects and environments, but that is too computationally expensive. In real life, we need objects that create a foundation for simulation models, not just single-instance data streams: a sea of devices with core abstraction models, feeding broader associated abstraction landscapes, building platforms for advanced simulations and interactions. I am not advocating, as many big data advocates do, that we create abstractions only within big data engines; as I pointed out, that is expensive in both computation and resources. On the contrary, while virtual abstractions are important, I am proposing an architecture that is empowered at its core: peer-to-peer abstractions allowing big data to have both internal and external models. This will increase the signal-to-noise ratio while creating better validation systems. We have to think outside the box and into every node for the maximum value in diversity. The more diverse, the better off our systems will be.
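To make the endpoint-first idea concrete, here is a minimal sketch, in Python, of nodes carrying their own first-level abstraction while a higher layer aggregates those abstractions rather than raw streams. All class and field names here are hypothetical illustrations, not part of any existing system.

```python
# Hypothetical sketch: endpoint nodes keep a first-level abstraction of their
# own data, and a higher layer aggregates node summaries, not raw samples.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class NodeAbstraction:
    """First-level abstraction held at the endpoint itself."""
    node_id: str
    topology: Dict[str, str]                       # e.g. location, neighbors
    summary: Dict[str, float] = field(default_factory=dict)

    def observe(self, reading: Dict[str, float]) -> None:
        # Keep a running mean per metric instead of shipping raw samples.
        for key, value in reading.items():
            prev = self.summary.get(key, value)
            self.summary[key] = (prev + value) / 2


@dataclass
class BankAbstraction:
    """Second-level abstraction over a bank of nodes."""
    nodes: List[NodeAbstraction] = field(default_factory=list)

    def summarize(self) -> Dict[str, float]:
        # Aggregate node-level abstractions into a bank-level view.
        merged: Dict[str, List[float]] = {}
        for node in self.nodes:
            for key, value in node.summary.items():
                merged.setdefault(key, []).append(value)
        return {key: sum(vals) / len(vals) for key, vals in merged.items()}


node = NodeAbstraction("sensor-1", {"room": "lab"})
node.observe({"temp_c": 20.0})
node.observe({"temp_c": 22.0})
bank = BankAbstraction([node])
bank_view = bank.summarize()  # aggregated view, no raw samples shipped
```

The point of the sketch is the shape, not the math: the expensive abstraction work happens at the node, and the "big data engine" layer only ever sees compact summaries.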

Autonomy: Focus comes from empowering profiles, profiles that eventually interact with both abstractions and simulations. I propose that achieving autonomous convergence is one powerful goal. So what is autonomy? It is a quality of both machine-to-machine transactions and self-directed avatars that move through space and time in order to report information back to their physical representation, whether a person or a system. As profiles move from simple business data profiles through personal and individual profiles and into avatars, we must develop big data models that empower these avatars to work in coordination with us in the physical world, drawing conclusions from advanced simulations run over an abstraction model that ties our physical world to our digital world. This association will, in the end, generate marketing landscapes that empower relationships between buyers and sellers, workers and employers, resources and requirements. Avatars are not just anthropomorphic; they are objects of any type that focus data in the realm of autonomy. Self-driving cars are the most common example. In my chart, self-driving cars sit just right of center today. We don't currently have the abstractions, autonomy, or simulations convergent enough to move them up and to the right. To do that, the environment would have to be "smart" and self-reporting, while the autonomous vehicle interacted with it through active and concurrent simulations. Make no mistake, the world is going this way. We need to architect and instrument our systems to more fully realize this opportunity.

Simulation: Working along the velocity and time domain, simulation does not begin to express its true power until we pass real-time, or zero-latency, systems. While we can create simulations over static data that lives to the far left of accumulation, dynamic self-reporting objects that live in the zero-latency space give us enormous power to check and validate simulations. This will, of course, require new algorithms, systems, and networks. Simulations that know how to use zero-latency objects with powerful built-in abstraction engines will change the way big data operates. With these states and tools focused on a virtual avatar, we create the basic notion of convergence. These are not simulations based on historical probability, as is common today; they are built on new tools that simulate an avatar moving through real-world abstractions time and time again. These powerful simulators interact with real environments, checking each and every option the virtual avatar wishes to explore, and rate them as good, better, or best before returning options to our physical-layer counterparts. Anticipating the corresponding physical activities will be a premium value. Use cases for this in the marketing space are clear: it's no longer guessing what I might do based on historical data, it's projecting what I am inclined to do based on the environment I might enter.
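The check-and-validate loop against a zero-latency object can be sketched in a few lines. This is a toy illustration only, with invented names and numbers: a simulator makes a prediction, the self-reporting object returns the live value, and the mismatch corrects the model on each pass.

```python
# Toy sketch (hypothetical names): a simulator's prediction is validated
# against a self-reporting, zero-latency object, and the error feeds back.

class LiveObject:
    """Stands in for a self-reporting device with near-zero latency."""
    def __init__(self, true_value: float):
        self.true_value = true_value

    def report(self) -> float:
        return self.true_value


class Simulator:
    def __init__(self, estimate: float, learning_rate: float = 0.5):
        self.estimate = estimate
        self.learning_rate = learning_rate

    def validate(self, obj: LiveObject) -> float:
        """Check the prediction against the live report and correct it."""
        error = obj.report() - self.estimate
        self.estimate += self.learning_rate * error
        return abs(error)


sim = Simulator(estimate=10.0)
device = LiveObject(true_value=14.0)
errors = [sim.validate(device) for _ in range(5)]
# Each validation pass shrinks the error as the simulation converges
# on what the live object actually reports.
```

Against static historical data there is no such feedback loop; the live report is exactly what makes zero-latency objects a validation system rather than just another data stream.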

E.g., I have never been to China before. I love to shop in open markets. I am concerned about my health. My avatar is scouting out locations in China based on my known travel plans. It runs through a number of scenarios and determines that my health concerns, given the current smog index, will prevent me from seeking out the open marketplaces I desire. Having validated that model, the avatar looks to fill my time beyond sitting in the hotel room and discovers a show that is not currently on my trip plan but matches much of what I like for entertainment. It checks my schedule and the venue and works out the scenario. Soon enough, I get a report on my mobile device suggesting a possible change in plans; it reviews the change with me and suggests what clothes to pack based on type, style, and match. It also notifies me that one of my jackets is due for cleaning, but the local cleaner is backlogged; I might want to try the hotel cleaner, which is far more reliable, in China.
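The scenario above is, at heart, a ranking problem. Here is a minimal sketch of how an avatar might score alternative plans against a profile; every field, weight, and data value is invented purely for illustration.

```python
# Hypothetical sketch: score candidate plans for good/better/best ranking.
# All profile fields, plan data, and penalty weights are invented.

def score_plan(plan: dict, profile: dict) -> float:
    """Higher is better: reward matched interests, penalize risks/conflicts."""
    score = 0.0
    score += sum(1.0 for tag in plan["tags"] if tag in profile["interests"])
    if plan["smog_index"] > profile["max_smog_index"]:
        score -= 2.0                    # health concern vetoes outdoor plans
    if plan["time"] in profile["busy_times"]:
        score -= 1.0                    # schedule conflict
    return score


profile = {
    "interests": {"open markets", "live shows"},
    "max_smog_index": 100,
    "busy_times": {"mon-am"},
}
plans = [
    {"name": "open market", "tags": ["open markets"],
     "smog_index": 180, "time": "tue-pm"},
    {"name": "evening show", "tags": ["live shows"],
     "smog_index": 20, "time": "tue-pm"},
    {"name": "hotel stay", "tags": [],
     "smog_index": 0, "time": "tue-pm"},
]
ranked = sorted(plans, key=lambda p: score_plan(p, profile), reverse=True)
# The indoor show outranks the smog-hit market, mirroring the trip example.
```

A real avatar would of course run thousands of such scenarios over live abstractions rather than three hand-written dictionaries, but the shape of the decision, score, rank, report back, is the same.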

It's not enough to dream of a future where things magically happen. Magic is only for those who don't perceive or understand the technology behind it. We need a new model to work against and strive for. By using these four quadrants, we start to shape a world with direction. There is an open standard we can create allowing these elements to share information and improve our lives. But if we simply design what's obvious and in front of us, we will miss the target and rebuild again and again. I propose these models as a basis for discussion, nothing more. Where you take it, that is the exciting part.
