I guess I ask too often: "what is it really?"

When you peel back the abstraction layers, what are the dependencies and the building blocks? At some point, you are building on top of something else that I am more familiar with (or that has a name you didn't make up yourself).
How much of the system are you building from scratch, and how much of its behavior comes from the underlying system? A surprising number of tools are just a simple integration layer on top of something else that's more complex. If you depend on one of these things, be prepared for an unexpected need to understand and debug that underlying complex system when it needs attention.
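To make that concrete, here's a sketch in Python of the shape I keep running into. The names are made up ("fancy_deploy" is not a real tool), but the pattern is common: a wrapper so thin that all of the actual behavior, and all of the interesting failure modes, belong to the thing underneath.

    import subprocess

    def fancy_deploy(src: str, dest: str) -> None:
        """Hypothetical "deployment tool": really just a thin wrapper
        around rsync, which is where all of the behavior lives."""
        result = subprocess.run(
            ["rsync", "-az", "--delete", src, dest],
            capture_output=True,
            text=True,
        )
        if result.returncode != 0:
            # These failures are rsync's failures: ssh auth problems,
            # partial transfers, a full disk on the far end. The wrapper
            # can't explain any of them, so debugging means understanding
            # the underlying system, not the few lines sitting on top of it.
            raise RuntimeError("deploy failed: " + result.stderr.strip())

    # Example: fancy_deploy("./build/", "web01:/var/www/site/")

The wrapper is easy to read. The wrapped thing is where you'll spend your time when it breaks.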
What unseen assumptions do you have about speed, latency, and performance? Two systems can superficially do the same thing, but if one of them came from a high-end data center model of the world, and the other came from a world of Raspberry Pi systems in basements and attics and utility closets, they will behave very differently under adverse conditions.
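Here's the sort of thing I mean, with numbers I made up for illustration (these are not defaults from any real library): two clients that "do the same thing" but encode very different ideas about what normal looks like.

    # Hypothetical defaults, invented for illustration.

    # Born in a high-end data center: the network is fast, peers are
    # close, and anything slower than this is treated as broken.
    DATACENTER_CLIENT = {
        "connect_timeout_s": 0.05,   # 50 ms
        "request_timeout_s": 0.5,
        "retries": 0,                # give up fast, fail over to a replica
    }

    # Born on a Raspberry Pi on flaky Wi-Fi in someone's attic:
    # slowness is normal, so wait longer, back off, and keep trying.
    BASEMENT_CLIENT = {
        "connect_timeout_s": 10.0,
        "request_timeout_s": 60.0,
        "retries": 5,                # with backoff between attempts
    }

Neither set of assumptions is wrong; they just come from different worlds, and you want to know which one your dependency grew up in.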
I am personally a fan of "robust in the face of adverse conditions", but that's not always the same as "as fast as possible to handle a global workload, even if it means it can only operate in high-end data centers".