Blogs, Posts and Press Releases

AsterionDB’s CEO Steve Guilford publishes thought leadership article in IDG’s TECH(Talk) Online Community

The following is a reprint of an article that first appeared in IDG’s TECH(Talk) online community forum.

Chapter 1

Maybe we’ve been going about this the wrong way. What if the answer to many of our technology and cybersecurity challenges was literally right under our noses? After all, transformation is frequently the result of taking on a new perspective.

Would a new approach that does away with long-standing assumptions bring about a shift in terms of efficiency, security, software development, cost-of-ownership and operational aspects? If this piques your curiosity, join us in a five-part series that will appear in IDG TECH(Talk) as we explore and discuss the implications of an architectural approach that I’ve been thinking about for quite some time.

Some of these concepts may be new, some familiar. I encourage all to participate in the discussions at the end of each article where you can ask questions and share your unique perspectives.

What’s The Problem?

TL;DR: Too many resources in the file system and the middle-tier.

Have you checked the news? Chances are another cybersecurity breach has been announced. If not, one soon will be. In most cybersecurity incidents we find that the file system either played a role in propagating the attack or was its target – usually both. The file system was never designed for security, and yet here we are, 60 years later, placing it literally at the bottom of our technology stack.

Is it any wonder why things are so out of whack? The file system was designed to look like a file cabinet, making it easy for people to transition from printed records to digital information. The nomenclature even derives from file cabinets, with files in folders, folders in folders, etc. By the way, have you looked at the file-system icon? It’s usually a picture of a file cabinet. I rest my case.

But wait, it gets better. Not only are we keeping most of our data in the file system but we also keep our programs there too. So, that means we have to have a secure environment in which to run those programs. That environment, of course, is based upon the file system.

I think you can see where I’m going with this. Until we deal with the legacy file system we will have a hard time ensuring architectural system security.

What do you see as the root of the problem?

How Did We Get Here?

TL;DR: Databases were not up to the task in the 90’s.

We’re going to spend a little time here simply because if we don’t understand how we got into this mess, we’ll never figure out how to get out.

In the beginning, everything was a mainframe. Computers were so expensive in the 50’s and 60’s that only the most mission critical operations were cost effective to computerize. Think of airline reservation systems, banking applications, government and defense; things that had to run 24 hours a day, 7 days a week in order for a business to operate or to guarantee mission success. IBM, a company we all know of, was and still is the dominant player in the mainframe market.

In the 70’s we saw the emergence of the mini-computer. Mini-computers, often borrowing from mainframe design principles, brought the entry price point down and opened up wider markets for computerization. Early business programming languages started to emerge, most notably COBOL – the COmmon Business Oriented Language – which became dominant.

The 70’s also saw the first practical implementation of computer networks and client-server architectures. One of the early companies to capitalize upon the client-server architecture was Wang Laboratories. An Wang1, an IBM veteran, guided his company to become an early leader in word processing systems that specifically leveraged the client-server paradigm.

In the 80’s, the micro-computer arrived with the Intel 8088 and Motorola 68008 chips, IBM PCs, the Macintosh and even the Commodore64! We also saw the emergence of the relational database.

At the close of the decade, languages were entering a first phase of maturity with the wide availability of C and COBOL; Visual Basic and Java would follow in the early-to-mid 90’s. Networking of computers was also becoming more common, with connection protocols that allowed for the adoption of client-server architectures, remote procedure calls (RPC2), centralized databases and shared, networked file storage.

But, trouble was brewing on the horizon.

Built atop a network that grew out of a DARPA project, and infused with inspiration from brainy followers of the Grateful Dead at Stanford and MIT3, the world wide web was upon us by 1995. Anybody with half a sense of technology could see that networked computers were the future and there was no stopping their advance.

The problem was that computer systems and design paradigms in use at the time could not handle the increase in end users – especially when viewed in a purely client-server model. It soon became apparent that a middle-tier layer would be required. Just think, if you had to reach out and install software on every end-user computer to run your application, you’d go bonkers and probably be out of business. Furthermore, changes at the data layer were slow to implement, in part because that’s where all of your mission critical data was. Those systems had to evolve slowly in order to satisfy mission critical requirements without sacrificing stability.

The middle-tier allowed us to spread out the computing resources and focus expensive data-layer technologies on mission critical structured data. The middle-tier, with web-servers as a basis, also allowed us to centralize and manage the distribution of client-layer application logic in the form of web pages and early web applications.

People at the time thought, wouldn’t it be great if we could move all of those files into the database? In fact, by the early 90’s many RDBMS’s supported BLOB data, but the theory didn’t quite pan out in practice. Databases were great at managing structured data, but unstructured data – not so much. In fact, there was a lot of ‘friction’ involved in working with unstructured data in the database. Quite frankly, it was just a whole lot easier to store that stuff in the file system and, with networks, sharing the data was easy too.
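That ‘friction’ is easy to demonstrate even today. A database can certainly hold unstructured data as a BLOB, but the application has to marshal every byte in and out itself, which is far clumsier than a simple file copy. A minimal sketch using SQLite (the table name, column names and payload here are illustrative, not anything from a real system):

```python
import sqlite3

# An in-memory database standing in for a 90's-era RDBMS with BLOB support.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (name TEXT PRIMARY KEY, body BLOB)")

# Storing unstructured data means reading the whole file into memory and
# binding the bytes as a parameter -- workable, but clumsy next to `cp`.
payload = b"%PDF-1.4 ... pretend this is a scanned contract ..."
conn.execute("INSERT INTO documents (name, body) VALUES (?, ?)",
             ("contract.pdf", payload))
conn.commit()

# Retrieval is the same dance in reverse: SELECT, fetch, write the bytes out.
row = conn.execute("SELECT body FROM documents WHERE name = ?",
                   ("contract.pdf",)).fetchone()
assert row[0] == payload  # the round trip works, but every byte passed through the app
```

Compared with dropping a file onto a network share, every read and write goes through the application layer – which is exactly why the file system won by default in the 90’s.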

So, with a little baling wire, bubble gum and an Apache server, we were able to patch together an architecture that supported numerous end users with a manageable framework that minimized end-user configuration requirements and centralized many administrative aspects.

All of the preceding, lengthy as it is, really just comes to the conclusion that since databases couldn’t handle unstructured data and complex logic we just piled it all into the file system. But, the file system was never designed for security. It was designed so you and I could organize information on a floppy.

What do you see as some of the steps along the way that got us here?

What is a Mainframe Architecture?

TL;DR: Smart terminals hooked up to network controllers that talk to the ‘big-box’.

First off, when I write of the mainframe architecture, I’m referring to the software and hardware aspects of the design. Operational aspects such as 24/7 availability and hot swapping are not part of this discussion.

The mainframe architecture basically consisted of a big-box where all of the data and logic lived, network controllers and smart terminals. The smart terminals, an IBM 32704 being an example, were capable of executing rudimentary user-interface logic such as field validation, making things blink and so forth. What’s important here is that the smart terminal packaged everything up into a neat transaction and sent it to the network controller. The network controller had nothing to do other than add some info and send the transaction on to the big-box. That’s because the network controller did not have any business logic or data resources on it.

So, here’s the simple question: what would happen if we migrated all of our business logic and user data out of the middle-tier and down to the data-layer?

What is a Modern Mainframe Architecture?

TL;DR: Browsers connected to an elastic security isolation layer with all resources (logic and data) residing at the data layer.

Finally, we can get down to what, exactly, a modern mainframe architecture is. Simply put, if we push all of the business logic and data resources down to the data-layer, we can turn the middle-tier (e.g. web-servers, load-balancers) into an elastic security isolation layer whose only responsibility is forwarding transactions from the client-layer.
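In code terms, the middle-tier’s job shrinks to something like the sketch below: a stateless forwarder that holds no business rules, no files and no user data, and merely annotates and relays transactions – much as the old network controller did. The function names and transaction shape are illustrative assumptions for this article, not AsterionDB’s actual API:

```python
import json

def data_layer(transaction: dict) -> dict:
    """Stand-in for the data layer, where all logic and data now live."""
    if transaction.get("action") == "get_invoice":
        return {"status": "ok", "invoice_id": transaction["invoice_id"]}
    return {"status": "error", "reason": "unknown action"}

def middle_tier(raw_request: str) -> str:
    """Elastic security isolation layer: parse, annotate, forward.
    Note what is NOT here: no business logic, no file access, no cached data."""
    transaction = json.loads(raw_request)       # reject anything that isn't well-formed
    transaction["via"] = "isolation-layer-1"    # add routing info, like the old network controller
    return json.dumps(data_layer(transaction))  # forward and relay the reply

reply = middle_tier('{"action": "get_invoice", "invoice_id": 42}')
```

Because the forwarder keeps no state, any number of identical copies can be spun up or torn down behind a load-balancer – hence ‘elastic’ – and compromising one of them yields no data and no logic to the attacker.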

This, however, leads to a wide array of questions, some of which are:

  • How would we build such a system?
  • What are the security benefits?
  • What are the operational benefits?
  • What are the development and integration benefits?
  • Why hasn’t it been done before?

We will explore these questions and discuss other conceptual aspects in forthcoming articles in this series. Until then, I encourage you to join in an extended discussion centered around The Modern Mainframe Architecture.

1https://en.wikipedia.org/wiki/Wang_Laboratories

2https://en.wikipedia.org/wiki/Remote_procedure_call

3https://www.wired.com/2015/07/grateful-dead-fare-thee-well-tech-pioneers/

4https://en.wikipedia.org/wiki/IBM_3270