Believe it or not, I am still working on it. I have the same old excuses for slow development: work. kids. sleep. But, on top of that, there are a couple of others. I had a hardware failure, which was a big obstacle. The source code was all safe, but the VMs were lost and had to be rebuilt. And, every once in a while, I come across something that needs to be done, and I just don’t feel like doing it. So, I procrastinate.
But, progress continues. Two days ago, the major stuff was all working.
– all machines communicate over a loosely coupled bus. It is redundant. There is a hub, but if the hub goes up and down, or if a node goes up and down, they all recover on their own.
– remote install. You can install a node on a remote machine from the admin web site.
– on-the-fly updates and synchronization. You can reprovision a node as a different type of node, and it will immediately synchronize and restart as necessary.
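None of this is the real code, but the self-recovery described above (hub or node going down and everything recovering on its own) typically reduces to a reconnect loop with capped backoff. A minimal Python sketch, with all names invented:

```python
import time

def connect_with_retry(connect, initial_delay=0.5, max_delay=30.0):
    """Keep trying to (re)connect until it succeeds.

    `connect` is any callable that raises ConnectionError on failure and
    returns a connection on success. The loop never gives up, so a hub
    that bounces up and down is tolerated transparently.
    """
    delay = initial_delay
    while True:
        try:
            return connect()
        except ConnectionError:
            time.sleep(delay)
            delay = min(delay * 2, max_delay)  # exponential backoff, capped
```

Each node would run something like this against the hub, and the hub against each node it tracks, which is roughly how both sides can come back without any coordination.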
That’s all of the high level cool stuff I was going for. But, yesterday I broke it all. As I have blogged more than once, the biggest challenge is the configuration. A few months ago, I built an API that I liked. Yesterday, I didn’t like it anymore.
The approach I was trying to take is that you define a server type. A server type has certain applications, and each application can then be configured for that server type: different features enabled or disabled, with different settings. But then, per individual server, you can tweak it. The API keeps track of all of that. If you update the server type, all servers of that type are automatically updated.
In principle, I like the idea, but I have struggled with the implementation. I decided to simplify it for the time being. When a server is created, it can be assigned to a model, and the server starts as a copy of that model. You can then tweak it as necessary, and it will be stored as just a server. If you change the model, it will compare the model against all servers based on it, and will give you the option to reconcile.
So, rather than keep track of the server types and all overrides, it just makes a copy. Then, you can optionally update the copy if the model changes. That’s going to take some work.
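As a rough illustration of that copy-then-reconcile scheme (the real project is .NET; everything below is a hypothetical Python sketch, not the actual API):

```python
import copy

def create_server(model: dict) -> dict:
    # The server starts as an independent copy of the model; from here
    # on it is stored as "just a server" with no live link back.
    return copy.deepcopy(model)

def diff(model: dict, server: dict) -> dict:
    # Settings where the server no longer matches the (updated) model.
    return {k: v for k, v in model.items() if server.get(k) != v}

def reconcile(model: dict, server: dict, accept) -> None:
    # Offer each changed setting; `accept` decides whether to take it,
    # so local tweaks can survive a model update.
    for key, value in diff(model, server).items():
        if accept(key, value):
            server[key] = value
```

The nice property is that there is no override bookkeeping at all: the model is only consulted at creation time and, optionally, at reconcile time.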
Otherwise, looking good. I had several chicken-and-egg scenarios that have finally come full circle. All the hard-coded values have been removed and replaced with calls to a configuration class. The applications and servers publish live updates as they do things, and if a user is on the admin site watching that server, the events all display. They are context aware; a task may have children, and they all render showing the proper hierarchy, e.g. Synchronize: determine files to download, download them, deploy, restart the application, etc. It nests as deep as you need to. The UI doesn’t render it perfectly. Sometimes the messages come in out of order over SignalR, so you might get a child before you get the parent. Of course that can be handled, but right now I’m just discarding the child. There are also some spacing concerns. But the UI is the least of my concerns. It needs to be functional and do what it needs to do, but I am not a UI guy and have no interest in being one. I will need someone else to make it nicer.
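For what it’s worth, the discarded-child problem can be handled by buffering orphans until their parent arrives. A hedged Python sketch of that idea (all names invented, nothing from the real code):

```python
class TaskEventTree:
    """Collects task events that may arrive out of order (e.g. over
    SignalR) and rebuilds the parent/child hierarchy.

    Orphans (children whose parent has not arrived yet) are buffered
    rather than discarded, and attached once the parent shows up.
    """

    def __init__(self):
        self.nodes = {}    # event id -> {"name": ..., "children": [...]}
        self.orphans = {}  # missing parent id -> [buffered child ids]

    def receive(self, event_id, name, parent_id=None):
        node = {"name": name, "children": []}
        self.nodes[event_id] = node
        if parent_id is not None:
            if parent_id in self.nodes:
                self.nodes[parent_id]["children"].append(node)
            else:
                # Parent hasn't arrived yet; remember the child for later.
                self.orphans.setdefault(parent_id, []).append(event_id)
        # Attach any previously buffered children of this new event.
        for child_id in self.orphans.pop(event_id, []):
            node["children"].append(self.nodes[child_id])
```

Whether the late children should also be reordered to match their original send order is a separate question; this only guarantees they end up under the right parent.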
I’m still calling it a “shallow pass”. I’m starting with a lot of minimal work to get all of the major pieces working together, then going back and working on them more thoroughly. The bus is a good example. For a long time, it was single-threaded and synchronous, which got it functional. Now it’s multi-threaded and asynchronous. There was some ambiguity with features and determining their current behavior. That’s being worked on now as I dumb down the configuration. (The reason everything is currently broken is that all of the features are defaulting to STOPPED, including the server-level features that need to be enabled. No biggie. It will be fixed soon.)
I have purchased a license for Atlassian Confluence, and I have started documenting essential things, primarily for my own reference at this point. Anyone who knows me knows that I love to document. I’m not being facetious… I really do.
The thing I’m driving for with the app server is that you just turn it on, and it works right out of the box. But you can also easily modify it any way you need; e.g., if you don’t want to use SignalR, then write an implementation that uses something else and drop it in. If you don’t like my bus, write your own. Etc. Every major piece is swappable. (But, I don’t yet have any alternate implementations to prove it, so it’s not a sure thing yet.)
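To illustrate what “swappable” means here, a minimal Python sketch of the kind of seam involved: an interface plus a registry that resolves whichever implementation was dropped in. All names are hypothetical, not the actual code:

```python
from abc import ABC, abstractmethod

class LiveUpdatePublisher(ABC):
    """Hypothetical seam: anything that can push live updates to the
    admin site. The default might use SignalR; a replacement could use
    raw websockets, server-sent events, or anything else."""

    @abstractmethod
    def publish(self, message: str) -> None: ...

class InMemoryPublisher(LiveUpdatePublisher):
    """Trivial drop-in replacement, used here just to show the swap."""

    def __init__(self):
        self.messages = []

    def publish(self, message: str) -> None:
        self.messages.append(message)

REGISTRY = {}

def register(name: str, impl: LiveUpdatePublisher) -> None:
    REGISTRY[name] = impl

def resolve(name: str) -> LiveUpdatePublisher:
    return REGISTRY[name]
```

The app server would only ever talk to the interface, so proving the claim is just a matter of writing a second implementation and registering it.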
I previously posted about using Service Locator in some places rather than DI. I still don’t regret it. It simplifies a lot of things immensely, perhaps with some ethical sacrifice. Without it, I’d either have a million Func&lt;T&gt;s everywhere, or a million factory classes that are just wrappers of the container anyway. I would not use it within any pluggable code hosted by the app server, but within the bowels of the app server, it is working quite well. I fought the DI fight on this, and I am a huge proponent of DI on every other project. But with this type of thing, I was fighting with it and complicating things just to say I was doing DI. Maybe at some point I will see a better way to do it and regret the decision, but it’s all internal stuff. It doesn’t affect application developers.
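For readers unfamiliar with the pattern, a bare-bones service locator looks something like this (hypothetical Python sketch, not the actual internals): internal code pulls what it needs from a static registry instead of having a factory threaded through every constructor.

```python
class ServiceLocator:
    """Minimal service locator: a static registry that internal code
    queries directly, instead of receiving dependencies via injection."""

    _services = {}

    @classmethod
    def register(cls, key, instance):
        cls._services[key] = instance

    @classmethod
    def get(cls, key):
        return cls._services[key]

def do_internal_work():
    # Deep inside the app server, no constructor injection needed:
    bus = ServiceLocator.get("bus")
    return bus.send("hello")
```

The "ethical sacrifice" is that the dependency is hidden from the signature, which is exactly why it stays confined to internal code and out of anything application developers plug in.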
I guess that’s about it. I am over a year past the point where I usually get bored with a project and stop it. I pushed through and got Dev Dash done, and I’m pushing through on this as well. I’m extremely satisfied with the product so far, but there’s a lot more work to do. (I will start by getting it working again.)
Screenshots may follow soon.
If either one of the readers of this blog would like more information, or possibly even work on it (UI, anyone?), you know how to get in touch.