QCON – Conclusions

November 23, 2009

This was a great conference.

I went to OOPSLA 2007. There, all the rage was SOA. Now, all the rage is CLOUD COMPUTING and DSL. I also had a lot of exposure to DDD, but I’m not clear if that’s because I just happened to run into it, or if it’s as big as the other two.

As a person who is always looking for something cool to innovate, cloud computing is discouraging and is forcing me to rethink my current priorities. I have been working on my own ESB for a few months now, very sporadically. I think I can build a very lightweight, very practical and usable bus. You can just double-click a few things and be up and running. It includes a very dynamic client to eliminate all of the heavy lifting on the client side.

But now I’m starting to think that it doesn’t matter. Who needs a lightweight little bus when you can go to the cloud and get everything? I’m trying to convince myself that maybe everyone won’t go to the cloud and maybe… just maybe… someone would like to use my little piece of software. To compound it, I’m also trying to convince myself that it’s something good to do as a learning experience, even if no one uses it. But, to play devil’s advocate with myself, wouldn’t it be better to learn it a different, more practical and appreciated way? I’m really torn about it.

DSL was another big concept that I learned a lot about. But, it’s not new. A DSL is just a language that describes something you want to do that gets loaded and executed. It could be a script. XSLT is a DSL. Of course, a lot of that has been formalized into types of DSLs and approaches, which is very useful. It gives names to concepts, thereby making them real things.
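To make that concrete, here’s a toy sketch of the idea in Python (the mini-language and the `run_dsl` name are made up, just to illustrate “text that gets loaded and executed”):

```python
def run_dsl(program, env=None):
    """Interpret a tiny line-based DSL: 'set x 5', 'add x 3', 'print x'."""
    env = {} if env is None else env
    output = []
    for line in program.strip().splitlines():
        verb, name, *rest = line.split()
        if verb == "set":
            env[name] = int(rest[0])      # define a variable
        elif verb == "add":
            env[name] += int(rest[0])     # mutate it
        elif verb == "print":
            output.append(env[name])      # emit its value
        else:
            raise ValueError(f"unknown verb: {verb}")
    return output

script = """
set total 10
add total 5
print total
"""
result = run_dsl(script)   # → [15]
```

It’s nothing more than text that gets parsed and executed, which is exactly the point.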

I’ve had a project on the backburner for some time now. A friend needed a webapp to do specific things for his specific purpose. I built the webapp to fulfill his need exactly. Now that it’s been live for a few months, I’d like to rewrite it better so that maybe I can get others to use it.

Now I have a bunch of new information to consider.

I decided months ago that I wasn’t going to just write a webapp. It’s been done. I considered using Sharepoint as a platform, but that costs some coin. So, my last thought was to use Dot Net Nuke. That’ll give them an entire CMS on top of it. (I never bind things together, though. If someone decides they want to add it to their existing app, I’ll make sure that can happen.)

Now, I’m thinking that I should build it using Azure. But, once in Azure, do I use its SQL databases, or do I take the plunge and go RDBMS-free? I should take the plunge.

Let’s pretend that’s settled. The next thing to put in there would be a DSL to describe everything. That’ll give me an opportunity to exercise a bunch of new concepts. But, there’s one thing I’m already conflicted about: DSLs have been described more than once as “small and concise to your particular domain”. Whenever someone tries to come up with a “silver bullet” implementation, I instinctively shake my head. It doesn’t exist. You can’t predict and accommodate everything that everyone is going to want to do. The best you can do is build a great extensible platform. If it doesn’t do exactly what you need, then extend it the way you need to. In this regard, I consider myself educated. I always build pluggable stuff. I believe that I’m very practical about this type of thing, and my successes support it.

I think the same concepts could port to a DSL. A DSL can span multiple models, probably successfully, as long as the DSL is extensible. My brain has been churning for days on a DSL to support this particular side project. But, suppose some of that ends up being right… how does that interact with Azure? Will I be able to do all that stuff up there? (If I rely on DYNAMIC, then I guess not.) There’s a lot to ponder.

Furthermore, we’re dealing with a lot of these same issues on my new team. It looks like I’ll be tackling it from multiple angles.

Conclusion of the Conclusion

I learned a lot at QCON. Now I have to figure out what to do with it all. Do I scrap current projects, or just put them on hold?

DSL and CLOUD COMPUTING are the new buzzwords that I’m excited about. I’m still excited about SOA too, though. I have a hard time letting go.

QCON Conference Recap – Friday

November 23, 2009

“Keynote: Next Generation Service Orientation: The Grid, The Cloud and the Bus” – David Chappell

This one was tough. It wasn’t an epic fail like one of the other keynotes, but it was boring. I ended up zoning out.

Primarily, it was a sales pitch for the Oracle SOA platform. As a co-worker put it: “and it wasn’t a good sales pitch”.

It looks extremely robust and configurable, but is ugly and seemingly very complex. I guess configuring an entire enterprise’s service layer is never going to be easy, but maybe it should look easy. It didn’t.

“The 7 Fundamentals of Mission-Critical Service Testing” – Robert D Schneider

At the beginning of every talk, I look up the speaker and add their blog to Google Reader. In this case, I couldn’t. As luck would have it, there are other “Rob Schneiders” out there.

This wasn’t completely engrossing, but it was interesting.

“Codename “M”: Language, Data and Modeling, Oh My!” – Amanda Laucher and Don Box

This was a very lively session. Uncharacteristically for me, it took a few minutes to warm up to it. I was really interested in this technology and wanted to dig in. But, there were a lot of big names in the room and there was a lot of ego-based bantering. I had to change my mindset to appreciate it. (I just wanted the info.)

My understanding of OSLO appears to have been 100% wrong. I won’t make it worse by telling you what I thought.

The M language (modeling) is, I guess, just a tiny piece of the puzzle. It allows you to write your domain using any type of text format you like. M is a language that allows you to translate that text format into a tree structure, which is then converted to dynamic code using the DLR.

The program you do this in is called INTELLIPAD. They call it a 3-pane program, but have since added pane #4 without changing the name. So, the panes are:

  1. The test data. This is where you put the text you want to parse.
  2. The M program to do the translation.
  3. The output of the translation as a tree.
  4. The output window, which shows any errors, etc.

Once you’ve written the M program, you can perform a transformation in C#. The resulting object is of type DYNAMIC. You can code against the dynamic object, although you won’t have any IntelliSense to help you.
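The text-to-tree-to-dynamic-object idea is easy to sketch in a few lines of Python (this is just an analogy for what M does, not actual OSLO code; the input format and names are invented):

```python
from types import SimpleNamespace

def parse_pairs(text):
    """Parse 'key = value' lines into a tree-like object with attribute
    access -- loosely analogous to M turning a text format into a tree
    you then code against dynamically."""
    node = SimpleNamespace()
    for line in text.strip().splitlines():
        key, _, value = line.partition("=")
        setattr(node, key.strip(), value.strip())
    return node

person = parse_pairs("name = Ada\nrole = engineer")
person.name   # attribute access with no static schema, in the spirit of C# dynamic
```

Like coding against DYNAMIC in C#, nothing checks `person.name` until runtime.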

This was an exciting demo. It’s a very cool product. You can even debug in INTELLIPAD as it parses.

This will be a huge help when writing DSLs. You no longer have to parse everything out yourself. Of course, you have to learn a new language, but that’s what we do.

“A Skeptical View of Language Workbenches” – Glenn Vanderburg

Glenn started off toning down the “skeptical” part of the title. He’s not as negative about the concept as the title may lead you to believe.

Following Don Box and friends, this was a lot more toned down. He was very succinct. For bonus points, he played a scene from Serenity. Awesome.

I did end up zoning out a bit. I think his point was, though, that he’s not convinced DSL workbenches are addressing the root problems of DSLs. But, he seems encouraged by the direction they’re going.

“Intentional Software” – Magnus Christerson

There was a lot of excitement around Intentional Software. I was told that someone said something to the effect of “Microsoft is in the 18th Century compared to Intentional”.

Intentional Software’s product is a collaboration tool for the domain expert and the programmer. It allows you to document the domain and convert that domain to rules and code. It then allows you to project (transform) that output as you need it. In the example we saw, the projection was to Ruby code.

Apparently this is the silver bullet for all of your DSL needs. I can’t argue… I don’t know enough about it. It looks big and it looks complex. By their own admission, it takes a developer a few months to get up to speed. It’s a new skill. Skills take time.

This was my first exposure to the product. I didn’t have enough background on it to fully appreciate it, but heck, it looked neat to me.

QCON – Conference Recap – Thursday

November 23, 2009

Keynote: Data and Programs: Rethinking the Fundamentals

This was a pretty big disaster. Don Box had a bad day. As far as I’m concerned, that’s all it was.

The feedback system was GREEN, YELLOW and RED cards that you deposit on the way out. There were lots of reds. I declined participation; he knew it bombed. He didn’t need us to tell him.

“Patterns for Cloud Computing” – Simon Guest

This was excellent. It made me want to run out and start using Windows Azure.

Simon Guest is a Microsoft guy, but I found the presentation to be well balanced regardless. Of course, he showed off Azure. But, he also spoke quite a bit about Google and about Amazon. He didn’t bash them or even compare them positively or negatively to Azure. He just talked about them and what they do.

In one demo, he submitted a request to the cloud. The page came back without the row because it hadn’t committed yet. A second later, the grid regenerated and the row was there. This brought me back to the WEBSOCKETS talk… whatever polling or refreshing that was doing could be eliminated once websockets are implemented. (See Wednesday – HTML 5)

This covered the basics of Azure, which I already knew from some reading: submit a request; put the request on a queue; process the queue with one or more workers. (This is how a product we built at work does things too).
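The submit/queue/worker pattern is simple enough to sketch in-process (this is my own toy mock-up of the flow, not Azure code; `process` is a stand-in for real work):

```python
import queue
import threading

def process(request):
    return request.upper()  # stand-in for whatever the worker actually does

def worker(q, results):
    """Pull requests off the shared queue until told to stop."""
    while True:
        item = q.get()
        if item is None:        # sentinel: shut this worker down
            break
        results.append(process(item))
        q.task_done()

def run(requests, worker_count=2):
    """Submit requests to a queue and let N workers drain it."""
    q = queue.Queue()
    results = []
    threads = [threading.Thread(target=worker, args=(q, results))
               for _ in range(worker_count)]
    for t in threads:
        t.start()
    for r in requests:
        q.put(r)                # "submit a request"
    q.join()                    # wait for the workers to finish the backlog
    for _ in threads:
        q.put(None)
    for t in threads:
        t.join()
    return sorted(results)      # order is nondeterministic, so sort for display
```

Scaling up is just `worker_count=10` instead of 2, which is basically the knob Azure gives you.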

The thing that surprised me about it is that you control the scaling. You tell it how many workers to use. If you get more load, you can increase the number of workers. I figured you’d be able to apply a policy or something to control this so that it can automatically scale up as it needs to within boundaries that you define. (Maybe it does do that, but that’s not what we saw).

Another demo was “find all of the prime numbers between 1 and x”. He went all out with that one… he distributed the calculations across Azure, Google and Amazon. The neatest part is that it reported it took a total of 6 CPU seconds across all of the machines. Neat.
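The prime demo boils down to partitioning a range across providers and summing the results. A rough sketch of that idea (the provider names are just labels here; nothing actually goes to any cloud):

```python
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def partition(lo, hi, parts):
    """Split [lo, hi] into roughly equal chunks, one per provider."""
    size = (hi - lo + 1 + parts - 1) // parts   # ceiling division
    return [(s, min(s + size - 1, hi)) for s in range(lo, hi + 1, size)]

providers = ["azure", "google", "amazon"]       # hypothetical labels
chunks = partition(1, 30, len(providers))       # [(1, 10), (11, 20), (21, 30)]
counts = {p: sum(is_prime(n) for n in range(a, b + 1))
          for p, (a, b) in zip(providers, chunks)}
total = sum(counts.values())                    # → 10 primes up to 30
```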

He also touched on SQL SERVICES for AZURE, which weren’t originally there. It’s not very cloudish, but there was a lot of demand for it, so they provided it. You can set up a 1 gig database or a 10 gig database. All databases are replicated across at least 3 servers.

Despite not being very cloudish, it seems that will make Azure more practical for a lot more existing applications.

“Mapping Relational Data Model Patterns to the App Engine Data Store” – Max Ross, Google

Max was brutally honest about the capabilities of the Google App Engine, to the extent that he talked a lot about things it can’t do. I ran through the App Engine startup tutorials probably a year ago, but that was about it. I couldn’t relate to the discussion, though I can say it was lively. People in the room had good questions.

“Architecting for the Cloud: Horizontal Scaleability via Transient, Shardable, Share-Nothing Resources” – Adam Wiggins

Another great talk that I couldn’t relate to. HEROKU is a cloud platform for Ruby applications. I’m looking at the website now, and it says “40,653 apps running right now!”.

Unfortunately, I don’t have anything else to say about this one. I’m not a Ruby guy, but if I were, HEROKU would be the place to go.

“Agile Development to Agile Operations” – Stuart Charlton

Disclaimer: I’m not sure if I’m remembering this accurately or confusing it with something else. Feel free to disregard. (I didn’t take notes, and my memory is fuzzy.)

This talk was about the changes to infrastructure when using a cloud based system.

Without a cloud, you need enough servers to cover all of your peaks. With a cloud, you only need to scale up when necessary. Only use the servers you need when you need them.
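To put numbers on that, a back-of-the-envelope sketch (the loads, capacities, and rates are all made up):

```python
def fixed_cost(peak_load, per_server_capacity, hourly_rate, hours):
    """Own enough servers to cover the peak for every hour, even idle ones."""
    servers = -(-peak_load // per_server_capacity)  # ceiling division
    return servers * hourly_rate * hours

def elastic_cost(hourly_loads, per_server_capacity, hourly_rate):
    """Rent only what each hour actually needs."""
    return sum(-(-load // per_server_capacity) * hourly_rate
               for load in hourly_loads)

loads = [10, 10, 100, 10]   # hypothetical requests per hour, one big spike
fixed = fixed_cost(max(loads), 10, 1.0, len(loads))   # 10 servers * 4 h = 40.0
elastic = elastic_cost(loads, 10, 1.0)                # 1 + 1 + 10 + 1 = 13.0
```

The spikier the load, the bigger the gap between the two.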

Amazon’s cloud solution is good for this. He gave an example of when someone had to convert millions of TIFFs to some other format (PDF, perhaps?). They were able to do it in a weekend using Amazon’s services. Rather than having to obtain all of the hardware and set it up, they used Amazon, and got it all done in an absurdly small amount of time.

QCON – Conference Recap – Wednesday

November 23, 2009

Wednesday, 11/18/2009

My intent was to take the entire architecture track. But, that didn’t work out.

Keynote Speakers – Salil Deshpande and Kevin Ufrusy

These guys are venture capitalists. They’ve invested in a lot of technologies that I had heard of at the time of the keynote, or had heard of by the time the conference concluded.

“Patterns in Architecture” – Joseph Yoder

This started off very interesting. Joseph is a great speaker. But, the room was too crowded. I had my back against the coffee table, and was surrounded on all sides. They brought in a few more chairs, but it didn’t help my plight. I left after 10 minutes. I wandered around to see what else was going on, but all of the doors were closed and I didn’t want to disrupt anything. So, I spent the remainder of the hour reading my shiny new .NET REST book.

“Google Chrome Frame” – Alex Russell, Google

This was supposed to be the first session of the BROWSER AS A PLATFORM track, but was switched due to some type of conflict. Fortunately for me: I wasn’t interested in the talk that was supposed to happen at this time, though I was later told that it too was excellent.

This was a great overview of a thing called “Google Chrome Frame”.

IE6 is currently the browser that everyone wants to go away. It represents an old generation of browsers. But, the fear is that it’s not going anywhere any time soon.

GCF is meant to bridge the gap between the old generations of browsers and the new ones. For GCF enabled sites, the IE rendering engine is bypassed, and the chrome engine is used instead. This gives you the benefit of coding using modern standards while using an old browser (although the old browser becomes nothing more than a shell for a new competing browser, oddly enough).

Alex reiterated that their preference is that you not use GCF; the preferred plan is that you upgrade your browser. But, if you can’t, then GCF will help prolong IE6’s life.

GCF is currently enabled through a meta tag in the page being served. By the time it ships, a header can be used instead. The meta tag must be in the first KB of data.
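Just to illustrate the first-KB constraint, a little check (the `X-UA-Compatible`/`chrome=1` meta tag is the one GCF uses; the function name and sample page are mine):

```python
def gcf_meta_in_first_kb(html, limit=1024):
    """Return True if the chromeframe opt-in appears within the first
    `limit` bytes of the page, per the constraint from the talk."""
    head = html.encode("utf-8")[:limit].decode("utf-8", errors="ignore")
    return 'content="chrome=1"' in head

page = '<html><head><meta http-equiv="X-UA-Compatible" content="chrome=1"></head></html>'
gcf_meta_in_first_kb(page)                  # tag is near the top, so True
gcf_meta_in_first_kb(" " * 2000 + page)     # pushed past 1 KB, so False
```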

There’s a javascript file, CFInstall, that you can reference on Google. The javascript will run the user through the GCF install. It provides some options to tailor the look and feel.

This is a neat product, although I personally won’t ever need it.

“Adventures of an Agile Architect” – Dan North

Dan is a funny speaker. I really enjoyed this session.

The big take-away from this was that you need a shaman. Much like ancient cultures, the history of a product or technology is best told through stories. “Why was this done this way?”  “Gather around, gather around… let me tell you the story of…. blah blah blah”. It probably doesn’t translate into text very well, but was funny in person.

HTML5 WebSockets – John Fallows, Kaazing

This was great.

HTML5 introduces a full-duplex websocket between the browser and the server. That’s huge. Now, your web browser can connect to the server and stay connected. It won’t have to send requests to get updated information; when there’s updated information, the server will just send it to you. No more polling.
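Push vs. poll is easy to sketch with plain callbacks (a toy in-process stand-in, obviously; no real sockets here):

```python
class PushChannel:
    """Toy stand-in for a full-duplex socket: the server pushes to
    connected clients instead of clients polling for changes."""
    def __init__(self):
        self.clients = []

    def connect(self, callback):
        self.clients.append(callback)   # client stays "connected"

    def publish(self, message):
        for deliver in self.clients:
            deliver(message)            # server-initiated delivery, no polling

received = []
channel = PushChannel()
channel.connect(received.append)
channel.publish("price update")
received   # → ["price update"]
```

The client never asks “anything new?”; data just shows up. That’s the whole appeal.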

This is part of the HTML5 spec. Of course, for it to be useful, the browsers and servers have to support it. They don’t yet. That’s where KAAZING jumps in. They provide a javascript client, and a server gateway to give you this capability now.

CHROME 4 BETA, released last week, has it built in. They gave a demo in CHROME 4 showing multiple parts of the page being continuously updated.

WebSockets, in my opinion, are a huge advance. Once all of the clients and servers have it built in, though, what is KAAZING going to do?

Evolving the key/value Programming Model to a higher level – Billy Newport

I was 2 minutes late for this (unusual for me), but felt like I missed an hour. He was really into it.

I found this talk particularly interesting because it’s directly related to some stuff going on at work. I really doubt they would ever go this route, but it is relevant info.

Personally, I’ve burned absurd amounts of calories pondering and playing with meta-driven solutions to a problem that I may or may not have. When I first saw the API in this demo, my first ignorant impression was “no big deal”. But, then I came to realize that the API, though simple, is spreading the work across a grid of machines. That makes it a huge deal.

Billy mentioned REDIS more than once. I didn’t know what that was, but now I have an idea of it. If I heard the talk again, it would probably be more enlightening.

This is part of the “get rid of databases” family of things to do. Rather than store relational data, you store all data as name/value pairs. You can also build up in-memory lists of things that you need, e.g., the last 10 users to sign up. Then, when you create a new user, you push it onto the stack and the oldest one falls off.
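The “last 10 users” trick is basically a capped list. A sketch of it (Redis-like in spirit, but the class and method names here are made up, not the Redis API):

```python
from collections import deque

class TinyKeyValueStore:
    """Sketch of the name/value + capped-list style described above."""
    def __init__(self):
        self.data = {}

    def set(self, key, value):
        self.data[key] = value

    def get(self, key):
        return self.data.get(key)

    def push_capped(self, key, value, cap=10):
        lst = self.data.setdefault(key, deque(maxlen=cap))
        lst.appendleft(value)   # newest first; the oldest falls off automatically

store = TinyKeyValueStore()
for n in range(12):                                  # 12 signups, cap of 10
    store.push_capped("last10users", f"user{n}", cap=10)
list(store.get("last10users"))[:2]   # → ["user11", "user10"]
```

No query, no index, no join: the “last 10” answer is just sitting there, pre-built.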

This is an exciting topic that I need to learn more about.

Keynote: The State and Future of Javascript – Douglas Crockford

Hands down, the best keynote of the conference.

Douglas Crockford knows a thing or two about javascript.

Javascript has been stagnant since about 1999. This is because the body of organizations responsible for improving it developed some internal strife. When Douglas joined the group, he didn’t agree with the direction version 4 was going, and he objected. He met with other companies behind closed doors and found that they objected as well, but were too shy (or concerned about anti-trust lawsuits) to object publicly. So, they backed him from the shadows.

Over time, it became two distinct groups in the same body. The initial group wanted to release version 4. Douglas proposed that they scale it back to just the things that make sense, and call it 3.1.

Douglas described all of the key events in vivid, entertaining detail. As geeky as it sounds, I think it would make a great movie. Heck, if a documentary on Donkey Kong can be successful, why not one on Javascript? (Incidentally, it’s really called ECMAScript, not Javascript.)

Cross-Node pubsub is working

November 9, 2009

  1. Start the broker
  2. Start Node 1
  3. Start Node 2
  4. Start the admin app

In the admin app:

  1. Open a publisher on node 1
  2. Open 2 subscribers on node 1
  3. Open a publisher on node 2
  4. Open 2 subscribers on node 2

If you publish from either publisher, all subscribers receive the message.
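The topology above can be sketched in-process like this (all names are hypothetical; the real thing talks between nodes over URLs):

```python
class Broker:
    """Central broker that fans messages out to every attached node."""
    def __init__(self):
        self.nodes = []

    def attach(self, node):
        self.nodes.append(node)

    def publish(self, message):
        for node in self.nodes:       # fan out to every node
            node.deliver(message)

class Node:
    def __init__(self, broker):
        self.subscribers = []
        broker.attach(self)
        self.broker = broker

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, message):
        self.broker.publish(message)  # route through the broker, not locally

    def deliver(self, message):
        for cb in self.subscribers:
            cb(message)

broker = Broker()
node1, node2 = Node(broker), Node(broker)
inbox = []
node1.subscribe(lambda m: inbox.append(("n1-sub-a", m)))
node1.subscribe(lambda m: inbox.append(("n1-sub-b", m)))
node2.subscribe(lambda m: inbox.append(("n2-sub-a", m)))
node2.subscribe(lambda m: inbox.append(("n2-sub-b", m)))
node2.publish("hello")
len(inbox)   # → 4: every subscriber on both nodes got the message
```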

The publisher and subscriber windows were associated with the main admin form via a hardcoded URL to node 1. I removed that and moved them to the node admin form.

Now that all of the moving parts are in and working, it’s time to put a lot of effort into it and make it solid. Currently, it’s little more than a prototype.

I wish I had more time to work on this. It’s slow going. But, I know where my efforts will be for the immediate future.