How (NOT) to Write Software
A companion article to “The Death of Virtuosity.” We explore what makes Sentia different from, and better than, any other software producer: virtuosity.
Normally we talk about healthcare and health insurance in this space, but today is a little different. Last week we introduced “The Death of Virtuosity” and detailed reasons why, by and large, people just don’t think; opinion and marketing seem to be the dominant forces in our society. Henry Ford famously said, “There are no big problems, there are just lots of little problems.” In other words, breaking any challenge down into manageable chunks is the way to produce a solution. Today we are going to walk through several examples of how we actually solve the hard problems. This is not meant as an advertisement for Sentia, but rather as a demonstration that virtuosity is not quite dead yet.
One way to produce superior solutions is to solve the general problem. For example, there are over 130 specialties in medicine. All of the big Electronic Medical Record (EMR) companies cater to these specialties. They don’t solve the general problem. The mistake is that building software specialty by specialty means that, to solve the entire problem, you need a new piece of software for every one of those specialties.
Here lies madness.
Solving this problem 130 times makes it all but insurmountable: 130 analyst teams deciding what to build, 130 project management teams deciding who works on what and how the work is reported, 130 test teams finding and documenting programming mistakes, and finally 130 development teams doing whatever it is that they do to get their job done. We say “whatever it is” because developers are notorious for running off after the next new thing, putting it in, and not telling anyone.
This is a lot of work and a lot of monkey motion, but what is actually wrong with it? It is wholly unmaintainable and non-viable, that is what is wrong with it. Nobody can manage 130 projects of that size and keep any kind of standards-based coding. It just can’t be done. It is also expensive. Think of the tens of thousands of employees, the big buildings, the infrastructure, the millions of dollars in air-conditioning alone. Finally, think of the millions of lines of code that have either been typed or, more likely, copied and pasted, since these little applications all do the same thing. When a bug is found, you have to inform all 130 teams and fix the exact same problem 130 times, probably with 130 different solutions. It is quite a bit worse than if Ford designed and built a new engine for each car and truck they produce, and they only produce about 15 models.
Here lies madness, indeed.
The solution, of course, is to solve the general problem. The problem is documenting the patient encounter. Imagine rolling up to your dealership in that brand new Ford we talked about earlier and being told they have an engine documentation system, a transmission documentation system, a suspension documentation system, and so on, 130 times over. You would go buy a Chevy so fast it would leave tire tracks out of the Ford store, and rightly so. The solution, then, is to build ONE EMR that is sufficient to document any patient encounter. It really should be pretty simple. The patient says this, the doc observes and/or tests that, the doc comes up with a diagnosis and executes a treatment, and the patient goes home.
Simple.
This is predicated on the fact that we have a universal nomenclature suitable for documenting the entire patient encounter. We do, and we have since the 1960s, before all this EMR stuff got started. Left as an exercise for you, dear reader, is to research the Unified Medical Language System (UMLS), and specifically the Systematized Nomenclature of Medicine, Clinical Terms (SNOMED CT), and figure out for yourself what everyone but me and thee is doing wrong.
Not too long ago a client tasked us with building a scheduling system for a Gastro group. They had a scheduling system of sorts, but it was built in the paradigm above: it applied to that one practice and no others. When they went to install it elsewhere, it took months of modifications to make it limp along. We were to replace this system. Our goal became to schedule Whitesnake at Reunion Arena in 1985. Yes, that has nothing to do with the Gastro practice, but if we solve the concert problem we solve the Gastro practice problem at the same time. So we thought about people, groups of people, venues, equipment, and ancillary things like ticket takers and janitors. All of these had to come together at the same place at the same time with the appropriate permissions. You don’t assign the band to a seat. You don’t assign a ticket taker to the stage. You can’t leave guitars and amps on the tour bus. We solved that problem in a data-driven way; a sketch of the kind of model we mean follows below. Now all you have to do is put the correct information in the database and this scheduling system can schedule anything, even Whitesnake, but certainly your colonoscopy.
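To make the idea concrete, here is a minimal, hypothetical sketch of what a data-driven scheduling model can look like. The class and property names are ours for illustration only, not the actual Sentia schema; the point is that bands, ticket takers, endoscopes, and exam rooms are all just resources with types and rules, and a booking is nothing more than resources meeting at a place and time.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch of a data-driven scheduling model (not the production schema).
// Every schedulable thing is a Resource; its ResourceType carries the rules.
public enum ResourceKind { Person, GroupOfPeople, Venue, Equipment, Ancillary }

public record ResourceType(int ResourceTypeId, string Name, ResourceKind Kind,
    bool CanOccupyStage, bool CanBeSeated);           // permission flags live in data

public record Resource(int ResourceId, string Name, ResourceType Type);

public record Booking(int BookingId, Resource Venue, DateTime Start, DateTime End,
    IReadOnlyList<Resource> Participants);

public static class Scheduler
{
    // The same rule engine books a concert or a colonoscopy; only the data differs.
    public static bool CanBook(Booking booking, IEnumerable<Booking> existing) =>
        booking.Venue.Type.Kind == ResourceKind.Venue &&
        !existing.Any(b => b.Venue.ResourceId == booking.Venue.ResourceId &&
                           b.Start < booking.End && booking.Start < b.End);
}
```

Whether a given resource type may be assigned to a stage, a seat, or a procedure room is just another row in the database, which is what makes the same engine reusable across industries.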
A couple of years after we produced this universal EMR, it occurred to me that we could add a couple of tables to the database to represent the policy and the intersection of policy and procedure (called, oddly enough, Policy and PolicyProcedure) and thereby automate the entirety of the insurance industry. The doctor or practitioner documents a covered procedure and we send payment for the performance of that procedure in real time. This eliminates medical coding, pre-authorization, insurance networks, adjudication, delays, denials, rate negotiation, salespeople/brokers/agents, the cost of a third-party EMR, skyscrapers in every major city in the US, and the hundreds of thousands of insurance-company employees that you, as the insured, ultimately pay for, and it cuts more than half from the cost of health insurance.
Talk about solving the general problem.
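As a rough illustration, and with names we are inventing here rather than quoting from the actual database, the idea reduces to two small tables and one lookup at the moment of documentation:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch only: illustrative names, not the production Policy/PolicyProcedure schema.
public record Policy(int PolicyId, int PatientId, DateTime EffectiveDate, DateTime? TerminationDate);

public record PolicyProcedure(int PolicyId, string SnomedCtCode, decimal NegotiatedPayment);

public static class RealTimePayer
{
    // When the practitioner documents a SNOMED CT coded procedure, pay it on the spot
    // if the patient's active policy covers it. No claim, no coder, no adjudication queue.
    public static decimal? PaymentFor(Policy policy, IEnumerable<PolicyProcedure> coverage,
                                      string documentedCode, DateTime serviceDate)
    {
        bool active = serviceDate >= policy.EffectiveDate &&
                      (policy.TerminationDate is null || serviceDate < policy.TerminationDate);
        if (!active) return null;                        // no active policy: nothing is sent

        return coverage.FirstOrDefault(c => c.PolicyId == policy.PolicyId &&
                                            c.SnomedCtCode == documentedCode)?.NegotiatedPayment;
    }
}
```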
We will discuss Architecture, both General and Specific, but let’s talk about automation for a minute. Sentia’s entire reason for being is not to sell software, medical or otherwise; it is to find a better way to do anything and execute that better way. If we do that, the money will take care of itself.
We automate processes. We have even automated the production of new software. We have a code generation tool, which I personally wrote, that generates all of the architecture and about half of the code for a new application. If we can automate what WE do, we can automate what YOU do.
Currently, the code generator emits .NET 8 and SQL Server 2022 T-SQL code, both the latest stable releases of their kind. Soon Microsoft will move to .NET 10 and I will update the generator to produce that. The changes are incremental; it shouldn’t take me more than a few days. The point, however, is that nobody else does this, and it is the only way to herd the cats that are developers into building what you told them to build instead of the latest and greatest widget they found on the internet, which may or may not be useful but is against coding standards.
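To give a feel for what that kind of generation looks like, here is a deliberately tiny, hypothetical sketch: a table description goes in, and the same boilerplate comes out the same way every time. The real generator is far more involved; none of the names below come from it.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical, stripped-down illustration of template-driven code generation.
public record ColumnDef(string Name, string ClrType);
public record TableDef(string Name, IReadOnlyList<ColumnDef> Columns);

public static class TinyGenerator
{
    // Emits a DTO class and a note about the matching stored procedure for one table.
    public static string EmitDto(TableDef table)
    {
        var props = string.Join(Environment.NewLine,
            table.Columns.Select(c => $"    public {c.ClrType} {c.Name} {{ get; set; }}"));

        return $"// Generated. Reads via stored procedure usp_{table.Name}_Read.{Environment.NewLine}" +
               $"public class {table.Name}Dto{Environment.NewLine}{{{Environment.NewLine}" +
               $"{props}{Environment.NewLine}}}";
    }
}

// Usage:
// var patient = new TableDef("Patient", new[] {
//     new ColumnDef("PatientId", "int"), new ColumnDef("LastName", "string") });
// Console.WriteLine(TinyGenerator.EmitDto(patient));
```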
Let’s say that again: this is the only viable way to produce reliable, scalable, secure, performant software. Notice the period. Everything goes in the same place, in the same way, with the same code. There are no aberrations, no cowboy coders, no mistakes. Anyone who doesn’t do it this way is doing it wrong, and as far as I know, we are the only ones doing it this way.
Think about it this way. For most of the 18th century, muskets and rifles were made by hand, one at a time, by individual gunsmiths. Around the turn of the 19th century, American armories pioneered building firearms from modular, interchangeable parts. No longer did you have to track down a gunsmith to fix your rifle; you took out the broken part and plugged in a new one. That shift from craftsman-built one-offs to standardized, interchangeable parts became known as the American System of Manufactures, and it is exactly why what Sentia does is different from anything anyone else does today.
None of that even mentioned the fact that the automation means we are about half done on the first day of development. That saves our clients time and money.
General Architecture is the study of how the big pieces fit together. Let’s dive in and take a look at why we need all these layers.
The Master Data Management (MDM) layer is designed to store data that is of use enterprise-wide. It houses people and companies and geography, emails and phone numbers, and all the things that a suite of applications would otherwise have to duplicate. This is different because prepackaged applications all keep their own copies of this information, meaning that you either spend millions integrating them or type in each data point multiple times.
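A hypothetical illustration of the point (the names are ours, not the MDM’s actual tables): every application stores a reference to the one master record instead of keeping its own copy.

```csharp
using System;

// Hypothetical sketch: one master Person record, referenced everywhere, duplicated nowhere.
public record Person(Guid PersonId, string GivenName, string FamilyName,
                     string Email, string Phone);                  // lives only in the MDM

// The scheduler stores a PersonId, not another copy of the name, email, and phone.
public record Appointment(Guid AppointmentId, Guid PersonId, DateTime Start);

// The EMR does the same; change the phone number once in the MDM and every app sees it.
public record Encounter(Guid EncounterId, Guid PersonId, DateTime DocumentedAt);
```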
The Single Sign On (SSO) application is the repository of all things Authentication and Authorization (Auth). Authentication is the process of making sure you are who you say you are; authorization is the process of making sure you are allowed to see what you are trying to look at and do what you are trying to do. With prepackaged applications, you either have to spend millions integrating some kind of universal Auth solution built on SAML, or a product like Okta or IdentityServer, or have your employees waste time signing into each prepackaged application individually.
Each of the applications we provide has hooks into at least three API applications: the MDM, the SSO, and whatever the target application is, maybe the scheduler we talked about earlier. The SSO, MDM, and target application each have their own, roughly 90% generated, APIs. These APIs are simply for getting data to and from the database. The data isn’t really information until it is pulled together and given some context. This is where the Backend For Frontend (BFF) pattern helps. Phil Calçado, then at SoundCloud, described the BFF pattern to solve the problem of not only disparate databases and external data sources, but also the fact that different screens need different data models. A cell phone can’t display as much information as a 75-inch monitor. We use the BFF pattern to coalesce the MDM, SSO, and target application data into something viable and logical for each screen. We can also incorporate external data sources, like the Texas A&M GeoServices address-checking API, to validate entered addresses.
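A minimal, hypothetical sketch of what a BFF endpoint does, assuming illustrative client and model names rather than our real ones: it calls the upstream APIs, then shapes the result for the specific screen that asked.

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical BFF sketch: one endpoint, several upstream APIs, one screen-shaped model.
public record PatientHeaderModel(string DisplayName, string NextAppointment, bool CanEdit);

public class PatientHeaderBff
{
    private readonly IMdmApi _mdm;          // master person data
    private readonly ISsoApi _sso;          // who is asking and what they may do
    private readonly ISchedulerApi _target; // the target application, here the scheduler

    public PatientHeaderBff(IMdmApi mdm, ISsoApi sso, ISchedulerApi target) =>
        (_mdm, _sso, _target) = (mdm, sso, target);

    // The phone screen gets three fields, not three raw API payloads.
    public async Task<PatientHeaderModel> GetAsync(Guid personId, Guid sessionGuid)
    {
        var person = await _mdm.GetPersonAsync(personId);
        var canEdit = await _sso.IsAuthorizedAsync(sessionGuid, "Patient.Edit");
        var next = await _target.GetNextAppointmentAsync(personId);

        return new PatientHeaderModel($"{person.FamilyName}, {person.GivenName}",
                                      next?.ToString("g") ?? "none scheduled", canEdit);
    }
}

// Assumed upstream contracts, for illustration only.
public interface IMdmApi { Task<Person> GetPersonAsync(Guid personId); }
public interface ISsoApi { Task<bool> IsAuthorizedAsync(Guid sessionGuid, string permission); }
public interface ISchedulerApi { Task<DateTime?> GetNextAppointmentAsync(Guid personId); }
public record Person(Guid PersonId, string GivenName, string FamilyName);
```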
The UI is art, and art is subjective. We don’t generate any part of the UI, since computers cannot generate art. This leaves us wide open to make the UI look, feel, and behave exactly the way the client wants and to make sense of the underlying complexity of what exactly we are doing. We generally don’t use JavaScript. JavaScript was a bad idea, badly executed, 30 years ago, and it ran its course about half a decade ago. Yes, you need it to directly manipulate the document object model, but we don’t need to do that often, and the BFF already handles the screen-size problem. We use Microsoft’s Blazor WebAssembly to write C#, the same kind of code the rest of the application is built in, that runs directly in the browser, compiled and fast.
By doing things this way we can separate concerns, avoid duplicating data, and avoid having to make the same change in multiple places. This approach also allows us to integrate disparate external data sources and use them as if they were native. Finally, we can generate most of the code for these layers.
Database architecture is also something of an art, and it is an art that I don’t see anyone else getting right. We are probably a bit obsessive about the normal forms. If you need a primer on normal forms, look up Boyce and Codd, the researchers for whom Boyce–Codd normal form is named.
To that we add the best practice of interacting with the database exclusively through stored procedures, little chunks of code that live in the database and control what it does. Most programmers will do anything to avoid using a database and writing database code. That is why we have NHibernate and Entity Framework and everything else that attempts to manipulate the database without much thought from the developers.
The benefit of this is increased security, and the ability to include other things in the procedure like writing to an audit schema. More on that in a few minutes.
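For the curious, calling a stored procedure from .NET looks roughly like this; the procedure and parameter names here are invented for illustration, not taken from our generated code.

```csharp
using System;
using System.Data;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;   // NuGet: Microsoft.Data.SqlClient

// Illustrative only: a repository method that talks to the database solely through a
// stored procedure. "usp_Patient_Read" and its parameters are hypothetical names.
public class PatientRepository
{
    private readonly string _connectionString;
    public PatientRepository(string connectionString) => _connectionString = connectionString;

    public async Task<(int PatientId, string LastName)?> ReadAsync(int patientId, Guid sessionGuid)
    {
        await using var connection = new SqlConnection(_connectionString);
        await using var command = new SqlCommand("usp_Patient_Read", connection)
        {
            CommandType = CommandType.StoredProcedure    // never raw SQL strings
        };
        command.Parameters.AddWithValue("@PatientId", patientId);
        command.Parameters.AddWithValue("@SessionGuid", sessionGuid); // see the JWT/GUID discussion below

        await connection.OpenAsync();
        await using var reader = await command.ExecuteReaderAsync();
        if (!await reader.ReadAsync()) return null;

        return (reader.GetInt32(0), reader.GetString(1));
    }
}
```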
Next, let’s look at how the APIs themselves are made. This is fairly technical, so we will be brief: just a statement of what we are doing and why it is different and better.
There are just a few things we do with data: Create, Read, Update, Delete, and Search (CRUDS). By generating database procedures and matching application methods for each of these operations we are far ahead of the game. There is a little caveat here. Updating database tables individually is not a best practice. However, it does work, and it gives us a way to automatically produce the code that does it with no thought and no expensive development required. If a performance issue is identified, we can go back and start combining these individual calls. This means that good enough is good enough. When good enough isn’t good enough, we know exactly where to look and what to do BEFORE we go spending a lot of the client’s hard-earned cash on things that don’t affect outcomes.
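In practice, every table ends up with the same five-operation shape. Here is a hypothetical example of what that surface could look like for a table called Appointment (illustrative names only, not generator output):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical illustration of a generated CRUDS surface for one table.
// Each method maps one-to-one to a generated stored procedure.
public record AppointmentDto(int AppointmentId, Guid PersonId, DateTime Start, DateTime End);

public interface IAppointmentRepository
{
    Task<int> CreateAsync(AppointmentDto appointment, Guid sessionGuid);               // usp_Appointment_Create
    Task<AppointmentDto?> ReadAsync(int appointmentId, Guid sessionGuid);              // usp_Appointment_Read
    Task UpdateAsync(AppointmentDto appointment, Guid sessionGuid);                    // usp_Appointment_Update
    Task DeleteAsync(int appointmentId, Guid sessionGuid);                             // usp_Appointment_Delete
    Task<IReadOnlyList<AppointmentDto>> SearchAsync(Guid personId, Guid sessionGuid);  // usp_Appointment_Search
}
```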
The API layer is everything between the database and the BFF. This layer consists of four projects: Core, Repository, Business Logic, and API.
The core layer consists of interfaces and patterns exclusively.
Interfaces are the contracts between senders and receivers. Interfaces can be added to, but not updated or deleted. This ensures that the receiver isn’t blindsided by a method that worked yesterday and doesn’t today. As an aside, we version all of our APIs for this same reason. We include interfaces for our Business Logic and Repository projects to guarantee these don’t change out from under the client and break their applications.
Data Transfer Objects (DTOs) are the objects filled in the repository layer from the recordsets returned by the database. They are a pattern used to ship data to the business logic layer, where they are translated into the Models that are shipped to the UI.
Models are the things that get transported to the UI layer. They are transformed from the DTOs in the Business Logic layer. We will discuss why this abstraction is important in the Business Logic description below.
The Repository project contains all the logic to send data to and receive data from the database and to translate the returned datasets into Data Transfer Objects that the subsequent layers can understand. This level of abstraction is necessary to separate and encapsulate all the database functionality in one place, making it fast and easy to maintain and/or extend.
The Business Logic project has multiple responsibilities. Mainly, and in the generated code, it translates the DTOs into the Models that get sent to the UI for display. This abstraction is necessary to absorb and mitigate database changes in a thoughtful and fault-tolerant manner. With this layer we could even switch from PostgreSQL or MySQL to SQL Server in the background, with this built-in ‘shock absorber’ there to compensate.
The Business Logic project is also where we can pull in external data, like the Texas A&M GeoServices address check mentioned above, and use it as if it were native. Until we need to make these modifications, however, the Business Logic project is 100% generated. A sketch of the DTO-to-Model translation follows below.
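Here is what that translation can look like in miniature; the types are the illustrative ones from the earlier sketch, repeated so this stands alone, and none of it is production code.

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical sketch of the Business Logic layer's main job: DTOs in, Models out.
// If the database (and therefore the DTO) changes shape, only this mapping changes;
// the Model handed to the UI stays stable.
public record AppointmentDto(int AppointmentId, Guid PersonId, DateTime Start, DateTime End);
public record AppointmentModel(int Id, string TimeSlot);

public class AppointmentLogic
{
    private readonly IAppointmentRepository _repository;
    public AppointmentLogic(IAppointmentRepository repository) => _repository = repository;

    public async Task<AppointmentModel?> GetAsync(int id, Guid sessionGuid)
    {
        AppointmentDto? dto = await _repository.ReadAsync(id, sessionGuid);
        if (dto is null) return null;

        // The 'shock absorber': the UI sees a friendly time slot, not raw columns.
        return new AppointmentModel(dto.AppointmentId, $"{dto.Start:g} - {dto.End:t}");
    }
}

public interface IAppointmentRepository
{
    Task<AppointmentDto?> ReadAsync(int appointmentId, Guid sessionGuid);
}
```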
The API project provides the external entry point into the API layer; it is simply a broker that orchestrates the rest of the projects in the layer.
Many modern applications use a JSON Web Token (JWT) to identify the authenticated user and his or her session. We do the same. Where we extend this functionality is to apply it to the database as well. We generate and save a Globally Unique Identifier (GUID) in the JWT, and the user must pass the JWT with this GUID back to us with every request. We then take this GUID and transport it all the way down to the database. That is the part nobody else does. We match the passed GUID against the stored GUID to identify the user and determine what data they can see. More on that under multitenancy. Again, this is unique in the industry and, combined with the exclusive use of stored procedures, absolutely critical to keeping the database and its data secure.
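As a rough sketch of the token side, assuming the common Microsoft JWT libraries and a claim name we are choosing purely for illustration:

```csharp
using System;
using System.IdentityModel.Tokens.Jwt;   // NuGet: System.IdentityModel.Tokens.Jwt
using System.Security.Claims;
using Microsoft.IdentityModel.Tokens;

// Illustrative sketch only: issue a JWT that carries a per-session GUID, which is later
// handed back on every request and passed down to the stored procedures.
public static class SessionTokenIssuer
{
    public static (string Token, Guid SessionGuid) Issue(string userName, byte[] signingKey)
    {
        var sessionGuid = Guid.NewGuid();    // also persisted server-side for later comparison

        var descriptor = new SecurityTokenDescriptor
        {
            Subject = new ClaimsIdentity(new[]
            {
                new Claim(ClaimTypes.Name, userName),
                new Claim("session_guid", sessionGuid.ToString())   // hypothetical claim name
            }),
            Expires = DateTime.UtcNow.AddHours(1),
            SigningCredentials = new SigningCredentials(
                new SymmetricSecurityKey(signingKey), SecurityAlgorithms.HmacSha256Signature)
        };

        var handler = new JwtSecurityTokenHandler();
        var token = handler.WriteToken(handler.CreateToken(descriptor));
        return (token, sessionGuid);
    }
}
```

On each subsequent request, the GUID pulled from the token is what the repository passes to the stored procedure, as in the repository sketch earlier.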
Multitenancy is storing multiple clients in the same database. That means you won’t have multiple “folders” or applications you are storing clients in, and you won’t have to worry about making a change to one client’s application that isn’t replicated to the others. That is all already done.
We take the GUID discussed earlier, relate it to the user, and relate the user back to the company they are associated with. Then we filter by company to return only the things that user is authorized to see. This is, like many things we do, unique in the industry. There are generally three ways multitenancy is done: database based, with a separate database for each client; schema based, with a separate schema for each client (neither of which is really multitenancy); and table based, which requires a company identifier in every table and is not particularly efficient. We reworked that third approach into what we call Relational Multitenancy, something we built to solve the problem better. Do the research and see for yourself, if you like. A rough sketch of the filtering idea follows.
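The essence of the filtering step, sketched with hypothetical names and deliberately simplified (in our systems the equivalent work happens inside the stored procedures, not in application code):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical, simplified sketch of tenant filtering driven by the session GUID.
public record Session(Guid SessionGuid, int UserId);
public record User(int UserId, int CompanyId);
public record PatientRecord(int PatientRecordId, int CompanyId, string Summary);

public static class TenantFilter
{
    // Resolve GUID -> user -> company, then return only that company's rows.
    public static IEnumerable<PatientRecord> VisibleTo(
        Guid sessionGuid,
        IReadOnlyDictionary<Guid, Session> sessions,
        IReadOnlyDictionary<int, User> users,
        IEnumerable<PatientRecord> allRecords)
    {
        if (!sessions.TryGetValue(sessionGuid, out var session))
            return Enumerable.Empty<PatientRecord>();   // unknown GUID: you see nothing

        var companyId = users[session.UserId].CompanyId;
        return allRecords.Where(r => r.CompanyId == companyId);
    }
}
```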
We have the ability to add triggers to the generated code that write all database operations to a separate audit schema. That gives you the ability to see the entire life cycle of any record in the database. You can see the current data. You can see the old, changed, or deleted data. You can restore changed or deleted data. You know when the change occurred and who made it. Nobody else has this ability. Usually you get an “UpdatedDate” and an “UpdatedBy” column in the database, which loses the old data as soon as the next update happens.
We partnered with Ola Hallengren to provide database backup, integrity check, and reindexing scripts and automation that keep your databases running in their best configuration. This is not something only we do, but we are thinking about the just-in-case scenarios.
We also use Kubernetes to orchestrate our containers and keep them resilient. A malfunctioning container is automatically replaced, and you will probably never even notice.
With the exception of Docker and Kubernetes (K8s), which we use for much of our infrastructure, we don’t use open source software. Open source is generally free and worth every penny. Even if you find something that is designed and built the way you would design and build it, and you won’t, it is one malicious user with a malicious code check-in away from installing a back door in your new software.
So we don’t do it.
We do use Docker for building containers and K8s for orchestration, load balancing, and high availability, which lets us avoid Windows licensing fees. This is almost as effective as Windows high availability and far, FAR less expensive. It also makes upgrades and licensing painless, and in some cases you won’t even know a failover has occurred.
We have demonstrated several ways in which we are different from, and better than, any other development or consultation service.
These are all things that nobody else does or that nobody else does better.
Also, the big consulting companies don’t really want to write custom software anymore; they just aren’t bright enough to get that accomplished. What they do is sell you millions in prepackaged software, Salesforce, ServiceNow, Workday, SAP, or other equivalents, then charge you more millions to modify the prepackaged software, more millions to integrate it, more millions to build an infrastructure for it all to run on, and finally more millions for some kind of help desk when it all goes south. You don’t know how any of this is built, how it works, how it is secured, or how you are going to integrate it, and neither do the big consulting firms. They have the same MBA that you do.
Partner with a company that does know how to write custom software and have the entire problem solved in one fell swoop. That way you get exactly what you need, no more and no less, and you spend a fraction of what the piecemeal solution from the big consulting firms would cost.
If you liked what you read, please like the article, click the notification icon, subscribe to our newsletter, and follow us on our social media and blog sites.