<?xml version="1.0" encoding="UTF-8"?>
<rss version='2.0' xmlns:dc="http://purl.org/dc/elements/1.1/"
  xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Randy Shoup</title>
    <description>CTO, father, iconoclast</description>
    <link>https://randyshoup.silvrback.com/feed</link>
    <atom:link href="https://randyshoup.silvrback.com/feed" rel="self" type="application/rss+xml"/>
    <category domain="randyshoup.silvrback.com">Content Management/Blog</category>
    <language>en-us</language>
    <pubDate>Thu, 16 Oct 2014 00:21:44 -0500</pubDate>
    <managingEditor>rshoup@pacbell.net (Randy Shoup)</managingEditor>
      <item>
        <guid>http://randyshoup.com/evolutionary-architecture#8938</guid>
        <pubDate>Thu, 16 Oct 2014 00:21:44 -0500</pubDate>
        <link>http://randyshoup.com/evolutionary-architecture</link>
        <title>Evolutionary Architecture</title>
        <description>Good Enough is Good Enough</description>
        <content:encoded><![CDATA[<p>At the recent <a href="http://gotocon.com/cph-2014/speaker/Randy+Shoup">GOTO conferences in Copenhagen and Aarhus</a>, I had the opportunity to have an extended set of conversations with <a href="http://martinfowler.com/">Martin Fowler</a> about an idea he has been turning over recently in his head — “sacrificial architectures”.  Every technology architecture is of necessity temporary, and we should both get comfortable with, and take advantage of, that reality.  Coincidentally, I’d also been thinking along similar lines --  that it is far more a privilege than a burden to get to rewrite a system you have outgrown. So I was inspired to write down a few thoughts about evolutionary architecture.</p>

<h2 id="why-are-architectures-temporary">Why Are Architectures Temporary?</h2>

<p>There is no one perfect architecture for all products and all scales.  Any architecture meets a particular set of goals or range of requirements (functionality, scale, etc.), within a particular set of constraints or context.</p>

<p>The functionality of your product or service will almost certainly evolve over time.  It should not be surprising that your architecture should as well.</p>

<p>Your scale changes, hopefully up and to the right!  What works at scale X rarely works at scale 10X or 100X.</p>

<p>Finally, the longer you keep doing something, the more deeply you learn about it.  So even if functionality or scale never changes, your future self knows a lot more about your domain than you do now.</p>

<h2 id="the-small-and-the-large">The Small and The Large</h2>

<p>The goal of a small startup is to <a href="http://steveblank.com/2010/01/25/whats-a-startup-first-principles/">prove a business model</a>, within strict constraints around resources and time.  So a startup’s architecture should optimize for cheap, rapid changes to the product.  This means technologies that are familiar to the team and easy to use.  Typically these days this is a monolithic application in a dynamic language like Ruby or PHP over a single monolithic RDBMS.  And while I can feel some eBay and Google colleagues shuddering, this is absolutely correct.</p>

<p>Even more importantly, being a startup means building for the near term.  There is no guarantee of any future beyond 3/6/12 months, and if you are still around, your business is likely to be very different than it is now.  So there is absolutely no reason to build for that future.  Thinking 2 years ahead is not just unhelpful; it is counterproductive.  Any effort spent on that future comes with a serious opportunity cost — that effort could and should be spent improving your product in the now.</p>

<p>I have advised a number of small startups over the last several years, and I often get asked “could you tell us how eBay and Google do things?”  Sure I can tell you, but you have to promise me up and down that you won’t do it!</p>

<p>By contrast to the startup, the primary goals for a large Internet company are to meet the needs of its (comparatively large) number of current users, make efficient use of its (comparatively large) resources, and maximize the productivity of its (comparatively large) engineering organization.  It’s far more about efficient execution in an established direction than about business model exploration.  And now we need to focus on various flavors of scaling issues — how to scale the organization and the technology to continue being efficient and productive.</p>

<p>eBay and Google, depending on how you count, are each on their fifth entire rewrite of their architecture from top to bottom.  <a href="http://www.slideshare.net/RandyShoup/the-ebay-architecture-striking-a-balance-between-site-stability-feature-velocity-performance-and-cost">eBay&#39;s architecture</a>, for example, evolved from Perl and files (v1, 1995), to C++ and Oracle (v2, 1997), to XSL and Java (v3, 2002), to full-stack Java (v4, 2007), to polyglot micro-services (2013+).  Looking back with 20-20 hindsight, some technology guesses look prescient and others look short-sighted.  But each of those phases used the best (cheapest, fastest, etc.) tool for the job at the time.  The related obvious point is that if eBay had implemented the 1995 equivalent of micro-services out of the gate, we would not even be talking about the company.  v1 would have collapsed under its own weight, and would have taken the company with it.</p>

<p>Typically at this scale, we have divided a monolithic team into smaller focused groups, componentized our monolithic application into something like micro services, and sharded our persistence infrastructure.  We have designed in resilience to failures in machines, networks, data centers, etc.  We have also introduced specialized systems for particular technological niches, like analytics, search, etc.  We have probably found that some of our technological needs are not met well by anything preexisting in the commercial or open-source worlds, and so have built custom systems from the ground up.</p>
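<p>The sharding mentioned above reduces to routing by key.  A minimal sketch of the idea (the shard count, hash choice, and host names here are invented for illustration, not any company&#39;s actual scheme):</p>

```python
import hashlib

SHARD_COUNT = 4  # hypothetical number of database shards

def shard_for(user_id: str) -> int:
    """Map a key to a stable shard index via a hash of the key."""
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % SHARD_COUNT

# Callers route reads and writes by key, never to "the" database.
connections = {i: f"db-shard-{i}.internal" for i in range(SHARD_COUNT)}
host = connections[shard_for("user-8675309")]
```

<p>The important property is that the routing function is deterministic, so every caller agrees on where a given key lives.</p>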

<p>I like to think of it this way — it’s not that eBay and Google <em>had</em> to evolve their architectures; they <em>got</em> to evolve their architectures!  It is very much a first-world problem to be growing so fast that you outstrip your current architecture.  It is a rare and wonderful privilege to have to rewrite.  I’m not missing that this is almost always “under the gun”, and pretty stressful, but you genuinely should feel happy that it is even worth it.  People care about your product, it’s straining under the weight of their enthusiasm, and you have the resources to fix it.</p>

<h2 id="conclusions-and-recommendations">Conclusions and Recommendations</h2>

<p>So what can we learn from this?  I will suggest a few concrete lessons:</p>

<h3 id="1-build-for-the-now">1. Build for the “now”</h3>

<p>Build to meet the needs of your near-term time horizon, for which you have reasonable certainty about requirements and priorities.  Depending on where you are in the cycle, this may be a few months, 1-2 years, etc.</p>

<p>Beyond that horizon, you will likely need to evolve the architecture (if you are lucky!) -- you just don&#39;t know now how or in which direction.  Expect it, accept it, welcome it.  Getting to evolve your architecture is not an indicator of failure; it is an indicator of success.</p>

<h3 id="2-prefer-evolution">2. Prefer evolution</h3>

<p>Once you have met your needs in (1), if you have choices among a number of different technological approaches, prefer the one which gives you the maximum ability to modify / replace / evolve the architecture.  Finance folks call this <a href="http://en.wikipedia.org/wiki/Option_time_value">“option value”</a>, and just as in the markets, it is often worth it to pay now for flexibility in the future.  </p>

<p>In the technology world, maximizing option value is about minimizing the cost of replacing or upgrading parts of the system.  There are two related ways to reduce this cost:  Simplicity and Isolation.  Bounding the complexity of any one component makes that component inexpensive to replace.  Bounding the interaction surface area between components makes the replacement of that component inexpensive for the rest of the system.  Strict component encapsulation, loose coupling, and event-driven / data-driven programming styles are all examples of this.</p>

<p>It won’t be a surprise to anyone that modern programming approaches from <a href="http://agilemanifesto.org/">Agile methodologies</a> to the <a href="http://www.reactivemanifesto.org/">Reactive Manifesto</a> place a premium on these properties.</p>

<h3 id="3-evolve-incrementally">3. Evolve Incrementally</h3>

<p>When you are so lucky that you have to evolve the architecture, do it iteratively and incrementally.  As Martin Fowler likes to say, the only thing a Big Bang rewrite guarantees is a Big Bang!  Instead of replacing everything all at once, choose one end-to-end use case and reimplement that.  You bound your risk, and are guaranteed to learn a lot you did not expect.  Use those learnings to help inform the next step.</p>
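<p>One common shape for this kind of incremental migration (close in spirit to what Fowler&#39;s colleagues call the strangler fig pattern) is a router that sends exactly one use case to the new implementation while everything else stays on the old one.  A toy sketch, with hypothetical handler names:</p>

```python
def legacy_handler(request: dict) -> str:
    """Stand-in for the existing monolith's request handling."""
    return f"legacy:{request['action']}"

def new_search_handler(request: dict) -> str:
    """Stand-in for the rewritten end-to-end use case."""
    return f"new:{request['action']}"

# Migrate exactly one end-to-end use case; learn from it, then expand.
MIGRATED_ACTIONS = {"search"}

def route(request: dict) -> str:
    if request["action"] in MIGRATED_ACTIONS:
        return new_search_handler(request)
    return legacy_handler(request)
```

<p>Growing <code>MIGRATED_ACTIONS</code> one use case at a time is what bounds the risk: each step is small, reversible, and a source of learning for the next one.</p>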

<p>Thanks for reading this far.  I have talked about many of these ideas in <a href="http://www.slideshare.net/RandyShoup/goto-aarhus2014-enterprisearchitecturemicroservices">From Monolith to Microservices</a>, so let me know what you think.</p>
]]></content:encoded>
      </item>
      <item>
        <guid>http://randyshoup.com/reinventing-the-wheel#8064</guid>
          <pubDate>Wed, 10 Sep 2014 17:01:33 -0500</pubDate>
        <link>http://randyshoup.com/reinventing-the-wheel</link>
        <title>Reinventing the Wheel</title>
        <description>Five technologies too many organizations reinvent</description>
        <content:encoded><![CDATA[<p>One consistent theme I have seen in 25 years in software has been the regular reinvention of some common technological building blocks.  Time and again, well-meaning developers and organizations spend hard-earned effort building things they really should buy / borrow / steal, paying the opportunity cost of not building more of their product or service.  It&#39;s been alternately surprising and bemusing, and I&#39;ll admit I&#39;ve fallen victim to the temptation more than once as well.  In this post, I&#39;d like to list a few common reinventions and suggest why they recur.</p>

<h2 id="common-reinventions">Common Reinventions</h2>

<p>At almost every organization I&#39;ve worked at, I&#39;ve seen at least one of these reinvented:</p>

<ul>
<li>Logging system</li>
<li>Message queue</li>
<li>O/R mapping layer</li>
<li>ETL </li>
<li>Job scheduler / work queue</li>
</ul>

<p>It&#39;s a safe bet that if your organization is larger than a certain size, or has been around long enough, you have (re)built one of these systems as well.  And you can almost certainly think of others.</p>

<h2 id="why-reinvent-these-wheels">Why Reinvent These Wheels?</h2>

<p>So what is going on here?  It&#39;s not like we don&#39;t have multiple solid open source and commercial implementations of all of these.  I think there are a few common factors at work:</p>

<ul>
<li>Requirements seem simple.  Almost every developer can outline in a straightforward way what one of these systems would do.</li>
<li>Implementation seems simple.  Nothing here seems like rocket science.  A logging system just has to get log messages from online systems into offline storage.  How difficult could that be?</li>
<li>Each seems like a fun and interesting, yet tractable, challenge.  Why should we use a &quot;heavyweight&quot; off-the-shelf system, when all we need (seemingly) is a small subset of what product X does?  Let&#39;s just build it ourselves.</li>
</ul>

<h2 id="deceptive-simplicity-and-the-80-20-rule">Deceptive Simplicity and the 80/20 Rule</h2>

<p>The truth is that it&#39;s actually not that hard to get the basics going.  But the simplicity is deceptive.</p>

<p>There is an 80/20 rule at play.  While a competent developer can knock something out in a few hours or days that covers the first 80%, correctly solving the remaining 20% of the problem takes an order of magnitude more effort than it appeared at first blush.  In a logging system or message queue, for example, you rapidly run up against fun distributed systems problems around node failures, network failures, duplication and ordering, etc.  We need to make the storage fast, efficient, and reliable.  Ditto the network transport.  The devil is in the details.</p>
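<p>As one concrete taste of that remaining 20%: a queue that retries on failure delivers messages at least once, so consumers must tolerate duplicates.  A toy sketch of idempotent consumption (real systems must also bound the seen-set, persist it across restarts, handle reordering, and so on):</p>

```python
class IdempotentConsumer:
    """Tolerates the duplicate deliveries an at-least-once queue produces."""
    def __init__(self) -> None:
        self.seen = set()        # in a real system: bounded and persistent
        self.processed = []

    def handle(self, msg_id: str, payload: str) -> None:
        if msg_id in self.seen:  # duplicate from a retry -- drop it
            return
        self.seen.add(msg_id)
        self.processed.append(payload)

consumer = IdempotentConsumer()
# A retry after a lost ack redelivers message "42":
for msg_id, payload in [("41", "a"), ("42", "b"), ("42", "b"), ("43", "c")]:
    consumer.handle(msg_id, payload)
# consumer.processed is ["a", "b", "c"] -- the duplicate was absorbed
```

<p>Each of these details is easy in isolation; it is the accumulation of them, under real failure modes, that eats the order-of-magnitude extra effort.</p>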

<h2 id="focus-and-tradeoffs">Focus and Tradeoffs</h2>

<p>But even if we believed we could build a simpler, proprietary logging system or ETL pipeline, should we?  I&#39;m all for building where it makes sense, but let&#39;s not lose sight of the tradeoff here -- in the form of opportunity cost.  Fortunately these are all solved problems, and have been for a long time. And they are not typically our organization&#39;s core business.  So they are, to borrow the Jeff Bezos phrase slightly out of context, <a href="http://www.oreillynet.com/network/2006/12/20/web-20-bezos.html">&quot;undifferentiated heavy lifting&quot;</a>.  </p>

<p>What would we be building if we did not invest effort into one of these?  It might be fun, but I&#39;ll submit that we probably have better uses of our limited time and resources.  </p>

<p>What do you think?</p>
]]></content:encoded>
      </item>
  </channel>
</rss>