Twitter Vs. The API Economy


Last week, social media giant Twitter announced that it would be suspending access to its “firehose” data feed API, a blow to companies such as DataSift that use the raw feed of every posted status for data mining, analytics, and insights.

In some sense, controversy can’t be avoided with so many consumers and businesses connected to each other through Twitter. Everything from proud moms to public service announcements, entertainment professionals to train schedules, flows through as tweets. Naturally, when Twitter decides to move people’s cheese, there are bound to be some hard feelings. DataSift isn’t fazed; it says its show will go on. But as with the Meerkat incident at SXSW this year, Twitter’s approach to its ecosystem is coming under heavy fire from those in the API economy.

Can APIs Change with the Times? 

An API is a contract: a way for things to be connected reliably and ubiquitously. As such, versioning is a key concept that every API publisher must at some point reconcile with the need to publish updates. If my changes break your apps, though, isn’t that against the very nature of a contract?

In theory, the answer is “no, change is fine”: the API economy should expect a certain amount of change to any system, provided mitigating actions come with it. In practice, however, business drivers and developer time are enough of a disincentive to maintaining stale APIs for existing clients that we often see providers simply publish a new milestone API that does the same thing the older one did, only slightly better. The old version might sit right next to the new, but fundamentally you’re asking (not demanding) that developers migrate, as in the sketch below.
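As a minimal sketch of what “old next to new” looks like in practice (using Python and Flask, with hypothetical endpoint names and payloads, not Twitter’s actual API), the two milestones often live side by side in the same service:

from flask import Flask, jsonify

app = Flask(__name__)

# v1: the legacy contract that existing clients still depend on.
@app.route("/v1/statuses")
def statuses_v1():
    return jsonify([{"text": "hello world", "user": "alice"}])

# v2: the same thing, slightly better; clients are asked, not forced, to migrate.
@app.route("/v2/statuses")
def statuses_v2():
    return jsonify({"data": [{"text": "hello world", "author_id": "1001"}],
                    "meta": {"result_count": 1}})

if __name__ == "__main__":
    app.run(port=8080)

Nothing stops a client from calling /v1/statuses indefinitely; migration happens only as fast as developers choose to move.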

Twitter’s move to forcibly shut off the firehose data stream is a demand that its ecosystem change immediately, which is not the sentiment the API economy likes to hear from such a kingpin. However, we API industrialists are not averse to change and can deal. One of the ways we verify that what we come up with is equivalent to what we had in place is proper integration testing.

API Virtualization: Rapid Prototyping and Integration Testing 

When you are rewriting an API, or writing a new version to deal with problems in an old one, you need control. You need to be able to rapidly prototype, develop, and test within a short timeframe, and you need to be able to reuse whatever you have already built to your advantage. You also need to make sure that the functionality, performance, and data compatibility of the new API version meet (if not exceed) the expectations set by the old version.

So consider the architect or developer faced with the awesome responsibility of making sure that new code written against Twitter’s “decahose”, a smaller sampling of the full raw firehose data feed, still works as expected and produces results consistent with the data requirements of their clients. Instead of writing two versions of the code (one for the existing old way, and one to compare against the new way), a virtual API based on the old target API can stand in during development as a single point where the developer can receive incoming transactions from clients and transform them to the new API seamlessly, only where data transformation is required. Live “data shaping” capabilities are an indispensable tool in the API prototyper’s toolbelt.
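A minimal sketch of that stand-in, again in Python with Flask and requests (the /firehose and /decahose paths and their field names are hypothetical, not Twitter’s real contract): the virtual API keeps the old endpoint alive for clients and reshapes data only where the two contracts actually differ.

import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
NEW_API = "http://localhost:9090/decahose"  # hypothetical new backend

@app.route("/firehose")
def firehose_stand_in():
    # Forward the client's request, untouched, to the new API.
    resp = requests.get(NEW_API, params=request.args, timeout=5)
    payload = resp.json()
    # Shape data only where the contracts differ: here the new API nests
    # tweets under "data" with an "author" field, while old clients
    # expect a flat list with a "user" field.
    tweets = [{"text": t["text"], "user": t["author"]}
              for t in payload.get("data", [])]
    return jsonify(tweets)

if __name__ == "__main__":
    app.run(port=8080)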

Likewise, when looking to validate the performance of new code, the new target API may not take kindly to being load tested. You might find that you hit your Twitter rate limit in the first few minutes of a test, blocking your entire team from the project for the rest of the day. In this case, you would use a virtual API in place of the actual target service during load tests against your own code, surgically removing the third-party calls as needed to understand the performance characteristics of your new changes.
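One way to do that, sketched under the same assumptions as above, is a stub that serves canned tweets with simulated latency; pointing your code’s base URL at it lets a load test hammer away without ever consuming a Twitter rate limit:

import random
import time
from flask import Flask, jsonify

app = Flask(__name__)

# Canned responses stand in for live Twitter data during load tests.
CANNED_TWEETS = [{"id": i, "text": f"sample tweet {i}"} for i in range(100)]

@app.route("/decahose")
def decahose_stub():
    # Simulate realistic third-party latency so the test still exercises
    # our timeouts and connection handling, but never touches Twitter.
    time.sleep(random.uniform(0.01, 0.05))
    return jsonify({"data": random.sample(CANNED_TWEETS, 10)})

if __name__ == "__main__":
    app.run(port=9090)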

Integration testing is the art of making sure that dependencies continue to function once a new component is in place. In API terms, you need to be able to A/B test the quality of transactions going to your new service against those going to your old one. For this, a virtual API can act as a proxy between your client and either service, most often routing the bulk of live transactions to the old service and a small, controlled sample to the new one.
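A minimal sketch of such a routing proxy, with hypothetical backend addresses and a 5% sample rate chosen purely for illustration:

import random
import requests
from flask import Flask, Response, request

app = Flask(__name__)
OLD_API = "http://localhost:9090"  # hypothetical old service
NEW_API = "http://localhost:9091"  # hypothetical new service
NEW_RATIO = 0.05                   # route ~5% of live traffic to the new service

@app.route("/<path:path>")
def ab_proxy(path):
    backend = NEW_API if random.random() < NEW_RATIO else OLD_API
    resp = requests.get(f"{backend}/{path}", params=request.args, timeout=5)
    # Record which backend served the call so transaction quality
    # can be compared between old and new offline.
    app.logger.info("backend=%s path=%s status=%s", backend, path, resp.status_code)
    return Response(resp.content, status=resp.status_code,
                    content_type=resp.headers.get("Content-Type", "application/json"))

if __name__ == "__main__":
    app.run(port=8080)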

Twitter + Virtual APIs = Developer Lifeline 

Changes to APIs are bound to happen. Whether you’re the one making them or the unhappy recipient, you have to bake in the expectation that things evolve over time. Your apps will need updating, your integrations will need to be reworked, and you’ll need to test it all to make sure your fixes are sound. Since we rarely have the luxury of doing all this in a single sprint, we need tools and processes that automate the busy work out of the equation, leaving us only the work that genuinely requires a human.

A developer’s best friends are often the tools they know well, and the same goes for virtual APIs. They give you back control over prototyping and over functional, performance, and integration testing, allowing you to stay agile and deliver on time. Having the right amount of control over your automated quality strategy is the best way to make sure third-party changes don’t stop you from shipping great software.

Related Links

What is an API Sandbox?

Hardening Your Application Against Failures with Virtual APIs

Webinar: A Journey with Nordstrom, Lessons Learned Implementing API Virtualization

API Virtualization solutions by SmartBear
