Fractal

Blog


Using Commodity Hardware

A Software Supercomputer

As more technologists agree that traditional IT computing stacks have reached end of life, there is growing interest in delivering a “supercomputer” that can process data and apps thousands of times faster.

Some call this hyperconverged infrastructure. Others talk about quantum computing. These are generally hardware choices offering faster processing with smaller footprints.

It seems few have looked at radical transformation of the existing software stack as the delivery vehicle for disruptive outcomes.

This is the world of Fractal Computing™ architectures and methodologies.

Let’s define disruptive and then, outcomes. Disruptive means, at a minimum, moving the decimal point one or two places – or seven. Think 10 times or 100 times or even a million times faster, cheaper, smaller, and less energy-hungry.

Outcomes are the currency of change, so let’s pick a few most can agree would be a good result:

  • Current applications are made to run 1,000 to 1 million times faster
  • Storage is reduced 90%
  • Data centers shrink 90% or more measured by footprint and power consumption
  • Apps one expects to take 36 months to build are delivered, in production, in a quarter
  • Cloud costs reduced 50% – 90%
  • Oracle, VMware and other software licenses are not needed
  • Overall IT spend reduced 50% while delivering many more new apps per month
  • Such outcomes are delivered risk free, at a fraction of the cost of current technology

Those are attractive outcomes – ones most can agree are not achievable with current technology, and would be great if they were.

This is disruption across virtually every aspect of modern compute – not just fast hardware – and from where does it come?

Surprisingly, the disruption comes from innovating in how large software systems are designed, implemented, and deployed.

A new class of computing – “Fractal Computing” – enables these disruptive outcomes.

Fractal Computing innovates simultaneously across:

  • Distributed processing,
  • Database architecture,
  • Stream processing,
  • Object oriented programming,
  • Micro-services architecture,
  • Full stack development frameworks (at macro and micro scale), and
  • Compiler design.

Engineers trained in Fractal Computing become “full system developers” who, as individual developers, replace scores of engineers in a traditional enterprise class software deployment.

Software systems that once took 18 to 36 months to build with large teams can now be delivered in a single business quarter, with a few full system developers, at a fraction of the cost.

One of the larger billing systems in the United States is for a Fortune 10 firm, with over 50 million customers, multiple classes of service, and scores of line items for each customer.

This billing system today runs across a collection of data centers so large that, if you put them together, you would need a golf cart to traverse them.

Just the billing system costs over $1 billion yearly to operate.

Yet a larger billing system, processing over twice as many customers and service classes, runs on a small collection of network cables and commodity hardware in a lab in Austin, Texas – built with, you guessed it, Fractal Computing.

Instead of a billing cycle taking 25 days to compute, it calculates all bills in a couple of minutes.

Instead of needing a golf cart, the distance across the data center is measured with a yardstick.

Instead of paying an army of engineers hundreds of millions of dollars a year to maintain, modify and build new billing apps, this same work is done with a single engineer.

Instead of burning the electric energy needed to power a small town, the power consumption is less than that needed for a Tesla.

Instead of paying Amazon or Microsoft $700 million a year for cloud services in their data centers, the cloud services bill to run this system is close to zero.

Here is an example of what supercomputing can deliver, today.

Saving big money is always an eye-catcher. But hidden in the benefit stack is what you can do with such a technology.

This behemoth billing system now operates without a data center. Think that through for a minute. One of the world’s largest billing systems – no data center. Bills can be delivered in real time, on a phone or edge device.

A customer can see their bill, for scores of different services, on a phone, the moment new charges appear.

Supercomputing is the future, but it may not be coming from the corner everyone is watching.


Fractal Computing™

No Need For Data Centers

For a generation we have lived with Moore’s Law: the speed and capability of computers doubles every two years while their cost is cut in half.

There has not, however, been a Moore’s Law in software.

Actually, quite the opposite has happened. Software apps tend to accumulate more functions, used by fewer people, until nobody understands why they were created.

To organize increasingly complex apps, layers of software categories emerged for key functions:  middleware, RDBMS, virtual machines, security, graphical interfaces. Each layer created a level of abstraction with its accompanying transaction overhead.

Apps became slower and more expensive to maintain, and business users sometimes waited weeks for even a simple request to be fulfilled.

Management eventually threw up its hands: “It can’t be this screwed up!”  “Do something!”

The Cloud Era represented replacement of the data center, or parts of it, with someone else’s data center. The cloud data center runs pretty much the same tools, the same software as the internal IT shop.

Apps that took days to run in the data center took just as many days in the cloud. Building new apps in the cloud, at best, took 20 percent less time. Costs did not appreciably change. Most of all, nobody gained a significant competitive advantage because the cloud is not a new technology.

The cloud is the same technology in a different place.

Business models are becoming the ultimate competitive weapon.

New business models require very different cost and performance capabilities than today’s data centers (whether they are in the cloud or on-premise) can provide.

New business model innovators have begun experimenting with Fractal Computing.

Fractal Computing is the next step beyond microservices and containers.

Fractal Computing enables entirely new outcomes because it is not limited by the current, obsolete tech stack.

Early Fractal Computing applications produced previously unobtainable outcomes.

For example, a legacy billing app that ran for 93 hours in a data center now runs in less than a minute on a $2,000 commodity hardware platform.

The common theme is speed and dramatic cost reduction without a data center. How can this be?

Fractal Computing delivers a different technology stack than the current Oracle/VMware/middleware stack. A new stack eliminates most I/O wait states, the bane of current technology.

Fractal Computing is by nature massively distributed. The collective power of an inexpensive network of computing devices acts as the data center. The IT data center, consuming between 2 and 5 percent of America’s energy, can eventually be replaced. Real GREEN progress occurs.

Fractal Computing delivers small light-weight apps at transaction points in a distributed system to dramatically reduce I/O wait states. Orders of magnitude reduction in I/O wait states are possible when replacing large enterprise applications.

Fractal Computing employs data pipeline processing to further reduce I/O wait states.
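
To picture the idea, here is a minimal sketch in Python (the feed format, rates, and function names are hypothetical, not Fractal’s actual machinery): each pipeline stage hands records to the next in memory, so no wait states are introduced by writing intermediate results to storage and reading them back.

```python
# A toy billing pipeline: every stage is a generator, so records stream
# from one step to the next inside a single process with no intermediate
# database writes or reads (the I/O wait states pipelining avoids).

def parse(lines):
    for line in lines:
        account, kwh = line.split(",")
        yield account, float(kwh)

def rate(records, price_per_kwh=0.12):
    for account, kwh in records:
        yield account, kwh * price_per_kwh

def tax(records, tax_rate=0.07):
    for account, amount in records:
        yield account, round(amount * (1 + tax_rate), 2)

feed = ["A100,850.0", "A200,12000.0"]   # stand-in for a meter data feed
bills = dict(tax(rate(parse(feed))))
print(bills)                            # {'A100': 109.14, 'A200': 1540.8}
```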

Using application-specific databases and abstraction models, Fractal Computing reduces abstraction layers in a system. This simplification removes the I/O wait states associated with translation from one representation to another in complex systems.

Fractal Computing takes advantage of the repetitive nature of most core business processes and moves the compute and storage associated with repetitive tasks to be co-located with the task. This can result in immense reductions in system I/O wait states.

These system-wide reductions in I/O wait states are what deliver performance that can be 1,000 – 1,000,000 times faster than data center (or “cloud”) applications.

Fractal Computing applications have been in production for 6 years at firms most of you know.

There is a growing constituency of innovators and early adopters who need to deliver disruptive business models, and Fractal Computing is the technology stack that makes deploying those business models both possible and practical.

One of the largest potential disruptions enabled by Fractal Computing is the eventual elimination of the corporate data center. New compute models like edge computing, without a centralized data center, are enabled.

As one executive who implemented a Fractal Web App for real-time data analysis recently said:

“With Fractal Web Apps, we concluded there is not an application we have or can envision that really needs a data center.”

As the Fractal Computing compute stack continues its entry into the executive toolset, the days of the legacy data center and its associated costs may well be coming to an end.


Deliver Apps at 10x The Speed

Locality of Logic

The next evolutionary step in enterprise-scale system design and deployment. Distributed microApps™ beyond the cloud.

Fractal Computing enables application designers and implementors to make system-wide optimizations for locality – both Locality of Reference and Locality of Logic.

Locality of Reference is the more commonly recognized of the two. It refers to optimizing a distributed system so that each running process needs to access only data stored locally. By avoiding network requests in the middle of computation loops, the overall speed of the distributed system can be improved significantly. These performance improvements can be multiple decimal orders of magnitude.
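
A minimal sketch of the contrast, with a hypothetical rate table and a simulated network round trip (illustrative only, not Fractal’s API):

```python
import time

RATES = {"residential": 0.12, "commercial": 0.10}   # hypothetical rate table

def fetch_rate_remote(service_class):
    time.sleep(0.001)                  # stand-in for a network round trip
    return RATES[service_class]

def bill_without_locality(records):
    # A network request inside the computation loop: the CPU idles on
    # every iteration while the data travels.
    return [kwh * fetch_rate_remote(svc) for svc, kwh in records]

def bill_with_locality(records, local_rates):
    # Locality of Reference: the rate table was shipped to the process
    # once, so the loop touches only local memory.
    return [kwh * local_rates[svc] for svc, kwh in records]

records = [("residential", 850.0), ("commercial", 12000.0)] * 500
t0 = time.perf_counter(); bill_without_locality(records)
t1 = time.perf_counter(); bill_with_locality(records, dict(RATES))
t2 = time.perf_counter()
print(f"remote: {t1 - t0:.3f}s  local: {t2 - t1:.3f}s")
```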

Locality of Logic is a term fewer people are familiar with – however, its impact on app development is significant. It shortens the development time of enterprise class applications to a single business quarter.

A "typical" non-trivial enterprise application may have a few hundred different relational database tables. These tables, and the interrelationships among them, are a tangle of object encapsulations resulting in complex application code layers.

No human can simultaneously understand hundreds of data tables and the permutations of their interrelationships. Thus, today’s major corporate applications take years to build and are maintenance nightmares.

Locality of Logic in Fractal Computing takes a different approach.

Locality of Logic has traditionally been implemented through object encapsulation (object-oriented programming). There, a programmer needs to understand the inner workings of an object only in the code that implements it. When the object is used as a building block in a larger application, programmers need only understand how it works “from the outside,” not its internal implementation.

Object encapsulation benefits do not translate equally well to the system level.

In Fractal Computing – both through the software run-time environment’s "plumbing" and through design and implementation methodologies encouraged and enforced by the programming framework – enterprise-class applications are designed with a vastly reduced number of database schemas.

In Fractal Computing, these encapsulations fall from hundreds to typically fewer than ten.

The first order result is SIMPLICITY.

Simplicity brings derivative benefits. One is simplified application code.

Fractal Computing dictates that application building-block logic is implemented locally, in a few schemas, in what the relational database world would call "stored procedures."

These “stored procedures” have Locality of Logic: the application logic is co-located with the storage schema definition and typically requires knowledge of only that schema, or at most one or two others.

While Locality of Reference simplifies by reducing I/O thrashing, Locality of Logic requires knowledge of only a single database schema instead of an intractable number of potential interactions among relational data tables.

Optimizing for Locality of Logic presents multiple data and compute models to the application developer. She can use the data model that most “naturally” fits the part of the application she is working on.

Needless complexity in traditional enterprise applications comes from forcing app data into a relational data storage model that may not be the most efficient way to represent the data.

When multiple data models easily co-exist, the application code becomes more intuitive, expressive, and smaller. Application logic building blocks can be embedded in the database schema rather than forced into computation layers bolted onto the database.
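
As a concrete (and deliberately simplified) sketch, imagine the billing logic living next to the one storage schema it needs, like a stored procedure co-located with its table definition. The field names and rates below are hypothetical, not Fractal’s framework:

```python
from dataclasses import dataclass

@dataclass
class CustomerUsage:
    # The storage schema and the logic that operates on it live together.
    account: str
    kwh: float = 0.0
    rate: float = 0.12
    tax_rate: float = 0.07

    def bill(self) -> float:
        # A developer needs to understand only this schema to change
        # billing logic; there are no joins across hundreds of tables.
        return round(self.kwh * self.rate * (1 + self.tax_rate), 2)

print(CustomerUsage("A100", kwh=850.0).bill())   # 109.14
```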

The simplicity delivered by Locality of Logic makes application-layer code significantly easier and faster to write. Empirical results show roughly a 10x reduction in the time needed to develop complex applications.

IT departments can deliver apps the user actually wants rather than apps that force the user to adapt her business to the application’s constraints.

Locality of Reference, which minimizes I/O wait states, can enable each server to do the work of 10 conventional servers.

Locality of Logic, with its simplicity and logic elevated to the system level, can enable each programmer to do the work of 10 conventional programmers who are forced to wade through jungles of hundreds of tables of interaction complexity.

The simultaneous optimization for both Locality of Reference and Locality of Logic can produce a 10-fold increase in resources for corporate IT.

Ten-fold increases in productivity and speed mean digital transformation may finally be a reality instead of a wishful dream.


Fractal Computing™

Empowering The Edge

Computing at the edge replicates your data centers (albeit in a smaller form factor).

Companies adopt this strategy in order to process closer to the customer and reduce network latency.  While these “edge” data centers may be physically smaller than centralized data centers, they rely on the same complex and expensive software infrastructure as large data centers.

If one replicates data centers, costs replicate as well.  These data centers at the network edge use VMware, Oracle, security products, and all of the other software infrastructure products associated with data centers – including all of their associated costs.

Such an implementation requires some connection with a central data center, thus continuing network latency as a constraint.

If one runs the same infrastructure as the main data center or cloud, everything else follows – apps are cumbersome, expensive, hard to build and increasingly difficult to maintain.

Firms that pursue this strategy – computing at the edge by replicating data centers at the edge of the network – will not achieve sustainable differentiation from their competitors.

Computing at the network edge is NOT the same as edge computing.

Fractal Computing™ architectures and methodologies enable the edge to use a tech stack purpose-built for edge computing – without the need for a data center (either centralized or at the "network edge").

Placing replicated data centers at the network edge enables a company to run perhaps hundreds of database processes simultaneously.

True edge computing, employing a deeply distributed, Fractal architecture, runs tens of thousands to tens of millions of database processes simultaneously.  Such edge computing atomizes conventional software infrastructure into a granularity that makes edge computing a “difference of kind.”

This type of edge computing delivers its promise via a distributed architecture that can utilize a heterogeneous hardware mix of servers, embedded devices, and mobile devices such as tablets and phones.

In Fractal edge computing, the requirement for conventional, legacy data center software infrastructure disappears.

There is no need for Oracle or any other data center commercial DBMS.  Ditto for VMware, conventional data center security products and middleware.

Edge computing can thus deliver real time processing, on virtually any hardware, at the network edge because it is built with an entirely different software technology stack.

The Fractal edge software stack is massively optimized to eliminate I/O wait states. The delay from hitting the enter key to parsing over 100 million records in a query is imperceptible.

The differences between computing at the edge and Fractal edge computing are seen in the business outcomes.

If you are computing at the edge, nothing much changes except some network delays are reduced.

However, with Fractal edge computing, everything changes:

  • Applications run 1,000 to 1 million times faster
  • Apps that used to run in large cloud data centers, now use bare Unix instances that reduce “value add” cloud costs to zero
  • Storage is reduced 80-90% because of the elimination of RDBMS legacy constraints
  • Applications that once took 2-3 years to build from scratch, now take a single business quarter
  • IT costs are reduced 50% while applications are delivered in a fraction of the previous time

While these outcomes are impressive, the real power of edge computing enabled with Fractal Computing comes from what it enables, not what it eliminates.

Legacy batch systems, with 40-year-old patched (or even lost) source code, can now be rebuilt in weeks and made to run in real time.

New digital applications can be imagined, built, tested, placed in production, and put in customers’ hands or on their phones in a single business quarter.

Two large corporations, each with intractable legacy systems, can partner to blend one’s products with the other’s distribution capabilities and deliver a real-time customer experience – in a single business quarter.

The outcomes are quite different. Computing at the edge kicks the can down the road where nothing much changes. Fractal edge computing delivers customer intimate apps, in real time, at a fraction of legacy world costs.

Fractal edge computing enables true digital transformation.


Enable Servers To Do 10x The Work

Locality of Reference

Microservices, containers, object oriented programming, and building software from reusable pieces make good sense.

Anything that rapidly enables expensive developers to do more with less is a good thing.

Now, microservices architectures have opened the door to an even more fundamental compute paradigm shift – a Fractal Computing™ ecosystem with distributed microApps™.

MicroApps architectures are the result of simultaneous innovation across the entire modern development stack:

  • Distributed processing,
  • Database architecture,
  • Stream processing,
  • Object oriented programming,
  • Fractal Computing™ architecture,
  • Full stack development frameworks (at macro and micro scale), and
  • Compiler design.

MicroApps enable developers to build, test, and deploy a microsystem that is a 100 percent functional equivalent of a major application, on one’s laptop.

MicroApps – not simply microservices!

This is the result of Fractal Computing.

Let’s go there for a minute.

Now a developer, working from home, can securely build a fully distributed node for a major production system. That node will be one instance of what will become hundreds or thousands of nodes in the running systems. Every one of those nodes behaves like every other node, albeit with different data.

Why is this important?

Fractal Computing optimizes for “Locality of Reference.” For those who are not computer scientists, that means the data needed for the next CPU instruction is accessed locally and does not have to be fetched remotely (say, from an Oracle database).

Getting rid of Oracle is the promised land for many CIOs, but the added benefit is creating supercomputer performance on existing hardware.

With locality of reference, application-layer code can run a thousand to a million times faster. In practical terms, every server can now do the work of 10 servers running traditional software. Data centers and cloud resource needs can be 1/10th their previous size.

With Fractal Computing, major apps that used to take 2 years to develop can now go from concept to production in a single business quarter.

Let’s get back to our developer friend, working quietly and happily from her home.

She is building the entire enterprise application.

Every system node behaves like every other and plugs into the Fractal Framework™. Once her system node is built, she can automatically bring up thousands of identical nodes, fully distributed, and automatically distribute and load the corporate data into them – all behind the cherished IT firewall.
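
A toy sketch of that replication model (the Node class, the sharding scheme, and the data are hypothetical; this shows the shape of the idea, not the Fractal Framework itself):

```python
# Every node runs the same code; scaling out means instantiating more
# nodes and sharding the data across them.

class Node:
    def __init__(self, node_id, shard):
        self.node_id = node_id
        self.store = shard                 # this node's slice of the data

    def query(self, account):
        return self.store.get(account)

def deploy(data, node_count):
    # Partition the corporate data and load one shard per node.
    shards = [dict() for _ in range(node_count)]
    for account, record in data.items():
        shards[hash(account) % node_count][account] = record
    return [Node(i, shard) for i, shard in enumerate(shards)]

data = {f"A{i}": {"kwh": 100 + i} for i in range(1_000)}
nodes = deploy(data, node_count=8)          # identical nodes, different data
print(nodes[hash("A42") % 8].query("A42"))  # {'kwh': 142}
```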

She also can run this now-massive application in a completely decentralized configuration without a data center at all.

Fractal Computing is where the compute world is going because it must.

Applications are moving to the customer. They are collecting data across vast networks and disparate locations. Customers need to compute where the data is, not be forced to send data to centralized locations that concentrate intrusion threats and risk communication delays and failures.

There is no way the world will build mini data centers everywhere the customer exists – hence the need to run at supercomputer speeds on tiny hardware.

This is the world of the fully distributed app and Fractal Computing is its foundational programming technology.

Fractal Computing applications approach security fundamentally differently.

MicroApps assume they are operating on a compromised hostile network. Every microsystem instance has full security infrastructure built in. Microsystems never assume they are behind the “safety” of a corporate firewall. This security paradigm is designed for the age in which we find ourselves.

Adopters of Fractal Computing believe the Black Swans of Coronavirus and societal disruption will continue to visit. More critical resources will work from home and remote locations. Security threats will continue to infect via the misapplication of legacy security to web-based, mobile applications.

Today, Fractal Computing is safely and securely bringing super-computer performance applications to where the data (and the developers) reside – not forcing the data to be transported to where the IT shop might be.

This is a prime benefit from harnessing Locality of Reference for massive speed on small hardware.


Evolving Microservices Into microApps™

Delivering Parallel Apps

Microservices, containers, and object-oriented programming are delivering some level of benefit in replacing monolithic apps. Unfortunately, that benefit is not enough to move the needle against the 70 to 90 percent of digital transformations that fail.

Transforming a monolithic legacy app into an agile, slender, responsive set of microservices requires months of evaluating every subroutine.

While the result of the transformation may be an agile app, the process of getting there is not much different from writing the monolithic app in the first place.

It does not have to be this way.

Fractal Computing™ architecture is the next step beyond microservices, containers, and object-oriented programming. Instead of delivering a collection of microservices, it delivers a collection of small, fully containerized, independent microApps™.

Its disruptive outcomes are possible because the microApps are freed from the current, legacy underlying tech stack.

Fractal Computing provides a minimalist software stack with persistent storage (database) at the bottom of the stack, distributed processing middleware in the middle of the stack, and GUI widgets at the top of the stack.

The Fractal Computing software stack can run anywhere – from large servers to inexpensive hardware at the network edge. The Fractal Computing software stack minimizes I/O which enables microApps to run at near silicon speed.

Fractal Computing technology consists of simultaneous innovations in:

  • Distributed processing,
  • Database architecture,
  • Stream processing,
  • Object oriented programming,
  • Fractal Web™ architecture,
  • Full stack development frameworks (at macro and micro scale), and
  • Compiler design.

Building microApps with Fractal Computing is an entirely different experience from rewriting or recoding a legacy app.

Many legacy apps have evolved into code thorn beds that do not provide what the business needs. Many firms, particularly with their Customer Care and Billing Systems, have adapted their business to the constraints of their legacy apps because the apps do not natively match the firm’s business processes.

Such monolithic apps are so onerous, so dangerous to change, that little if any innovation can take place within their boundaries. Just think of it: the most important customer-touching systems cannot be touched without fear of disaster.

Fractal Computing offers a very different alternative. A Fractal microApp™ can be developed and delivered at full production scale in a single business quarter.

In Fractal Computing, a single engineer builds a microApp which has the core functionality required for billing, or check processing, or customer care management, or whatever the required business functionality might be. The microApp is developed as a single node (running on the engineer’s desktop) and then, with the push of a button, is replicated across all nodes in the system and with aggregate access to the full production scale data collection.

Software systems that once took 18 months to 36 months to build with large teams, can now be delivered in a single business quarter, with a few full system developers, at a fraction of the cost.

No one has to touch or modify the fragile, scary legacy Customer Care and Billing System.

There are now two Customer Care and Billing Systems running in parallel. Now comes the fun part.

With two systems running in parallel, you now have two systems independently calculating customer bills. If the systems agree, there is high confidence the bill is correct. If they disagree, you have flagged a problem BEFORE the customer sees the bill.

A parallel Customer Care and Billing System can be the QA system for new billing or customer care features. The new parallel system can become the access point for the major accounts sales team since, because of its processing speed, it has real-time capabilities that are not possible with the legacy system.

The parallel Customer Care and Billing System can run in parallel forever as a QA system – or, after 6 months or a year, with every transaction being tested and reconciled, it can replace the onerous (and expensive!) legacy system.

Fractal Computing frees corporations from paying the never-ending “budget tax” of Oracle, VMware, and other legacy technologies.


Fractal Computing™

Transformation Behind The Corporate Firewall

The Fractal Computing™ architecture enables entirely new business models through its massively disruptive outcomes. One of those is edge computing – that is, edge without the need for a central data center.

Current computing at the network edge is too often about location. The technology, delivered by current hardware vendors, remains the same.

With a Fractal Computing architecture and methodology, edge computing becomes an entirely new distributed-processing software technology, eliminating the expensive legacy software taxes – Oracle, VMware, and the like – used to develop and deploy large production systems.

With such edge computing there is no need for data centers, in the cloud or on premise.

Companies understand the cloud is not nearly as secure as their internal network. If they forget, the Capital One CISO’s mid-career implosion is there to remind them.

Eliminating the data center does not mean eliminating the corporation’s infrastructure of security, governance, and best practices. For the CIO, eliminating much of the need for the data center with Fractal Computing means dramatically reducing the 66 percent of IT budgets typically spent maintaining expensive, and now unnecessary, legacy software such as Oracle and VMware.

For the CIO, eliminating much of the data center via Fractal Computing means freeing the CIO from being a purchasing agent hamstrung by legacy vendors who do not believe the CIO has other options.

Operating behind the corporate firewall in the midst of the company’s governance and security infrastructure, Fractal Computing apps deliver transformative benefits to corporate application portfolios.

One early adopter used Fractal Computing technology to reimplement their billing system as a distributed processing application. The new billing system went from inception to first production release in a single business quarter. The company has been running the legacy and new billing systems in parallel for over a year – using the new Fractal Computing billing system as a quality check on the legacy system – and has been able to virtually eliminate the 4% billing error rate it had been struggling with in the legacy system.

Fractal Computing provides a lightweight software stack that can run on almost any hardware platform (down to the smartphone in your pocket). Each software instance is a self-contained system that stores data, runs application code, requests data from other instances, responds to data and compute requests, and serves up web-based user interfaces both to interactive (human) users and to other software instances.

Fractal Computing delivers inherently loosely coupled distributed processing apps with built-in tools and frameworks for scaling and managing a deployed network of compute software instances. Fractal Computing enables the application developer and the business domain expert to develop and test functionality on small, bite-sized, easy-to-understand subsets of data and then easily integrate and scale the collection of system building blocks. Prototyping major system features can be done in hours and days instead of weeks and months. Similarly, moving from prototype to production scale occurs in days instead of months.

Fractal Computing applications are developed by full-stack software developers. In these applications, each software instance is a full-featured “system” containing database technologies, distributed processing middleware, application logic, and full-featured web servers that expose and manage web services APIs and HTML-based user interfaces. Features and capabilities can be developed and tested on a single software instance and then easily deployed throughout the distributed processing network.

Fractal Computing enables the CIO to finally have the tools their IT organization needs, for fast responsive development, behind the corporate firewall, in a safe, secure, governance-driven environment that delivers the price-performance transformation that “the cloud” has proven unable to provide.

Freeing the corporation from the tyranny of most legacy data center costs via Fractal Computing means the CIO can, finally, be a true transformation agent for the enterprise.


Migrating Legacy Apps

Forecasting App Example

Legacy vs. Fractal App:

  • Deployment Time: 18 months (legacy) vs. 18 hours (Fractal)
  • Development Resources: 15 - 20 people vs. 1 programmer
  • Storage Needs: 5 TB vs. a 90% reduction
  • App Run Time: 10 hours in a large data center vs. less than 1 minute
  • Time To Add New Feature: 45 days vs. daily
  • Required Infrastructure: Oracle, VMware, and a large server cluster vs. one Intel NUC ($4,000)
  • Time Needed To Move To Cloud: not possible vs. the app runs on any cloud or data center


Parallel Apps Enable

Continuous 100% Coverage Testing

Parallel System Verification as an alternative to traditional testing.

November 05, 2020
Authors:
Brian Bernknopf – QA Consultants
Michael Cation – Fractal

Often, the “long pole in the tent” in traditional software testing is the time it takes to properly set up test data and establish a scenario to be tested. Subtle but important changes to the code base and application under test often cannot be tested until later in the customer or workflow lifecycle – transpiring over logical date changes, triggering backend jobs, and so on. This is an intensive, repeated process that, even when automated, increases the “time to test” from the moment code and environments are delivered until the specific test cases can be executed.

Fractal Computing™ architectures and methodologies – modern approaches to software development focused on performance and simplicity, with many industry uses beyond QA – uniquely enable an alternative: the creation of parallel test systems.

This approach, currently in use in the energy utility markets, allows for rapid validation of system changes without arduous and complex test cycles and scenarios – delivering instead a complete end-to-end validation of core system processes, with mathematical proof points of system correctness.

100% Test Coverage

With Fractal Computing™

A New Testing Paradigm

When QA Consultants begins a system QA assessment, we are often looking for the right balance between cost of quality and speed to market – without, of course, sacrificing quality beyond the allowable risk levels of the particular system. For high-transaction or high-volume processing systems, we also evaluate whether an emerging QA technique, “Parallel System Verification,” is applicable. This process allows us to mathematically ‘prove’ that a system is performing as expected – not by running hypothetical test scenarios, but by running real data, in high volume, with real expected results.

The higher the volume, the better the math.

In short, if two systems given the same input data produce the same results, there is a high probability they are performing correctly.

Certainly, the system under test must be 100% separated from the parallel test system. Further, the parallel test system must itself be validated against a core data set to be accurate.

Using new technologies such as Fractal Computing™ architectures and methodologies allows us to rapidly stand up a parallel system and deliver continuous, 100% test coverage as a viable alternative.

We are currently seeing that continuous testing with 100% test coverage delivers significant benefits. Among them is the ability to essentially eliminate transactional production errors. Another is a dramatic reduction in ongoing QA costs while increasing the surface area for testing critical new app features.

  1. The Power of Parallel Execution: Continuous Parallel Testing and Coverage

One of the primary advantages of using Fractal Computing in the development of enterprise software is its ability to rapidly process significant amounts of data. Our software partners use this capability to run entire days, months, or years of input data through the system under test as a parallel system. Here, two systems, with different code bases, perform what should be identical tasks. The independent QA-as-a-Service vendor, QA Consultants, measures every transaction from both systems to determine, line by line, whether they are identical.

This is a two-step process that first validates that the parallel application accurately matches the existing legacy system. Through this process, multiple years of data are run and reconciled for correctness.

Once the parallel Fractal application is validated, the second step is to make the proposed software changes to both the legacy system and the parallel Fractal system. Then the data sets are re-run, looking for expected and unexpected outcomes. The outcome differences are analyzed and confirmed for defects.

Our experience has shown a 100% accuracy rate for the Fractal parallel system in identifying production defects when there is a mismatch between the two systems.

When calculations from two independent systems, each with different code bases, reach the same conclusion, the probability that the final product – i.e. a utility bill -- is correct, approaches 100%.

The volume of data allows for mathematical certainty of system sameness.
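
The reconciliation step can be pictured with a simplified sketch (the account and line-item names are illustrative; the real comparison runs over every transaction at production scale):

```python
# Compare every line item produced by the legacy run and the parallel
# run; any mismatch is flagged before a bill goes out the door.

def reconcile(legacy_lines, parallel_lines):
    mismatches = []
    for key in legacy_lines.keys() | parallel_lines.keys():
        a, b = legacy_lines.get(key), parallel_lines.get(key)
        if a != b:
            mismatches.append((key, a, b))
    return mismatches

legacy   = {("A100", "energy"): 102.00, ("A100", "tax"): 7.14}
parallel = {("A100", "energy"): 102.00, ("A100", "tax"): 7.17}

for key, a, b in reconcile(legacy, parallel):
    print(f"flag {key}: legacy={a} parallel={b}")  # investigate before billing
```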

  2. Eliminate Testing Through Validation: Testing a System of Record as New Features are Added

As enhancements to the system of record become available, the Fractal Computing approach allows the QA team to build them into a version of the parallel system.

Because Fractal technology enables complex applications to be modified, configured, and delivered rapidly, it is practical to construct parallel systems for each set of feature enhancements. With this approach, every line item is reconciled for every calculation in real time. The system of record can be checked against an established parallel system and also reconciled against one or more additional parallel systems to ensure that all of them agree.

This process is continuous: the parallel systems can be validated 24/7, across years of historical data.

When an anomaly appears, there is immediate feedback. As each new system is validated and moved into the role of “primary parallel system,” it becomes possible to introduce significant updates into a production system and test them at the level of every line item, on every bill or other output, for every customer – 100% test coverage of the updated system’s output.

Not 100% hypothetical coverage – 100% historical coverage. Complicated “edge” scenarios and complex processes are all accounted for. There is ZERO QA time spent on the creation of test data.

Further, the comparison of expected results to actual results also takes place in a Fractal Computing comparison solution, allowing for rapid validation.

Since Fractal Computing applications are more efficient than traditional applications, the hardware requirements are minimal. This makes it economical to run multiple systems in parallel without breaking the program budget. It also makes it possible to do extensive retrospective testing – for example, running three years of transactions through each of the parallel systems on a regular basis to find anomalies.

It also opens the possibilities up for fraud detection and hypothetical scenario modeling.


  3. Summary

Fractal systems for enterprise applications have been in production for 7 years. They have been used for 100% continuous testing with great success.

We have seized this technology and created a new way to test and validate transaction-oriented applications on an ongoing, continuous basis – delivering a higher level of quality than was previously possible, at a speed that allows for rapid system deployments with low risk.

QA Consultants and Fractal believe that this unique approach to quality, validation, and velocity is a significant mindset change for IT organizations, and we further believe it is ground-breaking. Its current use suggests the approach is worth considering as new systems come online or as legacy systems go through refresh and modernization.

There are many options for ETL validation as well as traditional system maintenance and upgrades.


Development's Dirty Secret

Nobody Knows Their Data

Movement to the cloud accelerated the desire to rebuild applications for lower costs, greater responsiveness, and competitive benefit. It is not happening.

There are several reasons a reported 70% of application transformations fail.

One is that the cloud is just someone else’s data center, using the same technology stack the apps previously used. The much bigger issue is that the world of relational – and even non-SQL – databases has disconnected people from understanding their data.

Let’s take an example.

A recent conversation with a multi-billion-dollar company centered on the 330 relational tables it used to calculate customers’ bills. No human can comprehend 330 tables with any understanding of what to do next. Why were there 330 tables?

The billing equation looks like this: there is a customer, who uses a service, which has a unit price; that unit price times the number of units is added to a bill, taxes are applied, and the bill is sent out. How hard is that?

Sure, there may have been a million or so customers. So what? Why are there 330 tables? Why scores of joins? Why a struggle for primary and secondary keys?

The dirty little secret, which is the key to application non-transformation today, is that the vast majority of those tables are required simply because a relational, index driven data structure is not the optimal way to manipulate data for bill calculations.

The tools being used by current IT and cloud providers are 40 years old. A lot has changed in 40 years, thus no wonder we live in a world of non-transformation.

Fractal Computing imagines data differently. Let’s go there.

First, Fractal Computing does not care about the code for that current billing system. No code review, no spreadsheets or diagrams. Fractal Computing wants to see only data – input data, all of it, and output data, all of it.

Try this exercise:

  • Instead of 330 tables, create three.
  • One bucket is the data about the customer. Everything known about the customer, even the marketing department’s spreadsheets from the last reach-out event, goes in one location.
  • A second bucket contains what that customer does. She uses her phone, consumes minutes, is charged an agreed-upon price depending on time of use or other rules.
  • The third bucket is what the company wants to do about it.
  • The firm applies the price, business rules about rebates or specials, puts them on a bill, adds them up, adds tax and sends a bill.

Imagine for a minute all the data for a massively complex billing system is structured in this way.  Anyone can wrap their heads around 3 data locations. Complexity is gone; transformation becomes not only possible but predictable.
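
A minimal sketch of the three buckets, with hypothetical field names and business rules, purely for illustration:

```python
# Bucket 1: who the customer is.
customers = {"A100": {"name": "J. Smith", "segment": "residential"}}

# Bucket 2: what the customer does.
usage = [{"account": "A100", "minutes": 420, "price": 0.05}]

# Bucket 3: what the company wants to do about it.
actions = {"rebate": 0.10, "tax": 0.07}

def make_bill(account):
    lines = [u["minutes"] * u["price"] for u in usage if u["account"] == account]
    subtotal = sum(lines) * (1 - actions["rebate"])
    return round(subtotal * (1 + actions["tax"]), 2)

print(customers["A100"]["name"], make_bill("A100"))   # J. Smith 20.22
```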

Fractal Computing expects that most data comes from somewhere else. It has a sophisticated ETL function to smoothly accept data in multiple formats, from many locations, process it and emit the results.

For 40 years software vendors, particularly those from the database world, believed their data stores were the center of the computing universe. These vendors continue to hold many IT shops hostage to painful maintenance fees for essentially obsolete technology. It may be obsolete, unable to support transformation, but it is the infrastructure of the modern enterprise.

These vendors’ self-interest lies in complexity, expensive licenses, and onerous technical support staffing. They like those 330 tables and want to build even more indices spanning them, eating machine resources.

There is another way.

Fractal Computing frees firms from the need for relational databases and their attendant overhead. For the first time in a generation, perhaps two, a company can actually understand their data in those 3 tables.

Even the business exec understands: the customer, what they do, what do we want to do about it? Three data locations.

Transformations fail because of the dirty little secret: nobody can understand their data; it is arbitrarily scattered across hundreds of tables no human can comprehend. There is no economical transformation with current technology stacks.

Fractal Computing delivers the most complex apps in a single business quarter because one finally understands one’s data.


End Of An Era

Software Is About To Change

Moore’s Law means hardware gets about twice as fast every 24 months as chip features shrink. We are reaching the end of silicon-based efficiency because the laws of physics mean the etching on chips cannot get all that much smaller.

Some say the age of quantum computing lies ahead. Probably for some scientific apps or weather forecasting, but for business computing, quantum computers are not going to be a big seller any time soon. Applications would have to be rewritten in code for which there are few if any programmers. Quantum computers are not the next big thing.

Highly efficient, optimized software stacks are the future. And that future is at hand!

Software, particularly the DevOps movement, has been the lazy uncle in this equation for over a generation. Software has not had a major architectural innovation impacting speed or efficiency – ever. Software tagged along with faster hardware, cheaper processors hiding the inefficiency of its bloat.

A typical tech stack from the hardware up has a data management layer, middleware, virtualization, security, app code, a user interface. Each layer is general purpose. That means every layer has every conceivable feature, 95% of which no one customer needs. But they remain and must be supported.

Each layer introduces I/O wait states. Every I/O wait state means the CPU is wasting time not doing anything productive; the application is flailing. A flailing app eats up energy and compute resources, with no productive business outcome.

Software engineers once delivered purpose-built apps based on knowledge of how a CPU works. They wrote applications that pipelined data in its most efficient form.

Today’s software is too expensive to license, too difficult to maintain and too slow to offer anything close to business agility.

Then there are the secondary costs of inefficient software: data centers sprawl to cover entire city blocks and consume an estimated 3% - 5% of the energy grid.

Eliminating or reducing the data center footprint is now the largest energy-saving opportunity for a major corporation. As the need to conserve power increases, firms will pay more attention to lowering the energy footprint of those data centers.

The best option at hand to reduce those costs and save energy is to dramatically increase the efficiency of software.

That efficiency – measured in the elimination of I/O wait states, and thus in optimized processors – is fast becoming the domain of Fractal™ architectures.

Fractal Computing is the next step up from microservices. A microservice makes code reusable, cuts programming time, takes advantage of containers, yet runs with pretty much the same efficiency as the technology it replaced.

Fractal Computing delivers a microApp™.

Each microApp has, built in, the full tech stack it needs to operate. The data management, middleware, security and even GUI are purpose-built for that microApp.

If the microApp needs to manage cell call billing, its data management understands how cell phones generate transactions. If it needs to manage an electric meter with unique types of data feed, that comes as part of the data management layer.

The result is efficient use of the machine processor. Reducing I/O wait states makes a microApp run 1,000 to a million times faster. That’s a pretty good result if you are looking for transformation.

Fractal Computing is a difference of kind instead of just degree.

A microApp costs 1/10 as much as an app built with traditional technology. It uses 85% less storage. It can be built from scratch in a quarter.

One of the most common ways to introduce Fractal Computing is to build the parallel app for a QA process.

One can take a typical billing system, build a parallel app in a quarter, watch it run 1,000 times faster, use it as a QA checker for every transaction, and then – after a month, a year, or several years – eliminate the legacy app and reap the benefits.

Fractal Computing offers a means of dramatically reducing data center footprints by replacing entire rooms of servers with a single IBM Z platform or a wall of inexpensive Intel NUCs.

Energy reduction, greenhouse gas reduction, cost reduction, and increased business agility – these are all readily achievable with increased software efficiency provided by Fractal Computing.



When Your COBOL Developer is...

Feeding Pigeons

"Jack, we have 120 critical apps with little or no source code? What do we do if one breaks?"

"Those we patch in binary. I think there may be some old guys in the park who can still do binary!"

There are hundreds of critical business apps running today with no or little source code available. They were written in COBOL or another ancient language, usually on a mainframe, and the developer is long gone, in a park somewhere feeding the pigeons.

Or there were multiple acquisitions of that insufferable billing system, and now 8 billing systems make up the whole – and 3 or 4 of them have little or no source.

The issue is how to meet compliance requirements when such an app breaks or cannot be proven to do what it is meant to do. And if it is a financial app, the risk just went way up.

The typical answer is to bring in a small army of consultants, do app reviews, look at reports, and find flow charts if they are around – which they usually are not. Then come months of requirements definition, and then the app rewrite begins.

And they will charge the poor customer a million, perhaps many millions to rewrite the app.

That process takes 18 - 36 months and, when done, has added ZERO value to the business. The app runs pretty much the same as it did before.

Is that a good investment?

Not in the day of Fractal Computing™.

Fractal Computing is the next step in microservices and object-oriented programming. It enables those gnarly legacy apps, with little or no source code, to be rewritten in a quarter, using only the data feeds, input and output, as the ingredients.

Fractal Computing not only delivers these apps in record time; they arrive in containers, run 1,000 to a million times faster, use 90% less storage, and can be updated using the most common, inexpensive skills.

Virtually all corporate applications, especially those ancient COBOL ones, are made needlessly complex because of the infrastructure in which they had to perform.

Ninety percent or more of current application software deals with the complex infrastructure of VMware, security products, reporting products, Oracle, non-SQL databases, and the thousands of related products.

Each of these is a general-purpose system. Some are worlds unto themselves. Each needs highly paid technical experts to keep it running. But the dirty little secret is that they were never needed in their entirety to run the app.

Fractal Computing replaces the obsolete, overly complex current tech stack with a sliver of a tech stack customized for that specific app. There is a database, middleware, and a graphical user interface purpose-built for that one application. All the extra junk is tossed out. You do not pay for what you do not use. You do not need tech talent for what is not there. You do not need to write complicated code for something that does not exist.

When some of these veteran coders encountered Fractal Computing, they were not surprised. They immediately understood the implications of delivering not a microservice, but a fully containerized microApp™.

They said, "wow, you guys are now writing code the way we did it 50 years ago. Welcome to the best way to get an app done!"

These original coders were not gifted an O/S, a DBMS, or a virtualization layer. They had to build apps that solved a problem the quickest way possible without them. They built skinny apps that ran fast, used minimal storage, and optimized the hardware.

Fractal Computing is a way to convert the most complex legacy apps in a quarter, run them in parallel with the current system, and later discard the legacy app.

Sometimes, those guys feeding the pigeons look like they were pretty smart 40 years ago.


Cloud Native vs Fractal Computing™

A Comparison

Legacy Cloud Native vs. Fractal App:

  • Speed: Microservices enable apps to run 10% faster vs. Fractal apps run 1,000 - 1 million times faster
  • Agility: New apps delivered 15% - 25% faster (DevOps) vs. Any app built from scratch into production in a quarter, with 10x faster DevOps
  • Downtime: Hours per month vs. Less than 30 seconds per year
  • Vendor Lock-In: Apps use cloud-specific services and must be reprogrammed to rehost vs. Any app, on any cloud, any time, with no code change – the same app code runs on all clouds, and clouds become a commodity
  • Transformation: Limited to faster server provisioning; apps run the same vs. Every app runs 1,000 times faster, uses 90% less storage, and is built in a quarter
  • Cost: The cloud is just as expensive as the data center vs. 50% - 90% reduction in cloud and data center costs
  • Containerization: Apps rewritten in containers vs. Apps rewritten in containers
  • Architecture: Microservices vs. MicroApps
  • Respond to Markets: New apps in months vs. New apps in hours or days
  • Fractal Development: Not possible vs. Build a fractal app for 25,000 customers, test, then add data for billions of customers with no app changes
  • I/O Wait States: Zero impact vs. Virtually eliminated
  • Oracle / VMware: Required vs. Eliminates the need for most software licenses
