Opportunities in Software Engineering Research for Web API Consumption
Originally published at: http://www.apiful.io/intro/2017/05/30/se-research-opportunities.html
About two years ago, I wrote about researching web APIs in this blog. I outlined how REST(-like) APIs outlived more complex service paradigms around SOAP and WS-*, and argued that research should embrace the flexibility and simplicity of web APIs. Since then, we have made some progress in automatically inferring web API specifications, measuring API quality, and checking web API requests against specifications. These efforts have further highlighted concrete challenges that web API consumption poses from a software engineering perspective.
Challenges for software engineering
While some may argue that few REST-specific research challenges remain open from a services-computing perspective, I think there is a lot to be done from a software engineering perspective. In a recent talk at the 1st International Workshop on API Usage and Evolution, I discussed some of these challenges:
String-based interface
Web APIs expose a string-based interface, which provides no compile-time checking. Consider the example below of a JavaScript request to a web API: the URL, the HTTP method, the request payload, and the returned data are all strings.
[Figure: Example web API request written in JavaScript]
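Roughly, such a request looks like the following minimal sketch (the endpoint URL, payload fields, and response structure are assumed for illustration, not taken from the original figure):

```javascript
const url = 'https://api.example.com/v1/users';     // URL is a plain string
fetch(url, {
  method: 'POST',                                   // HTTP method is a string
  headers: { 'Content-Type': 'application/json' },
  // Request payload is serialized into a string:
  body: JSON.stringify({ name: 'Alice', email: 'alice@example.com' })
})
  .then(response => response.json())                // returned data is parsed from a string
  .then(user => console.log(user.id))               // a typo like user.idd goes unnoticed until runtime
  .catch(err => console.error(err));
```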
In this and similar cases, typos, an incompatible data structure, or wrong formatting can typically only be detected at runtime. IDEs don't know the specifics of the web API to invoke and lack capabilities to provide feedback on possible errors during development. Our work on statically checking web API requests against specifications, and its integration with the Atom editor, is one approach to addressing this challenge. However, further support is needed to check entire existing codebases, or to assist in fixing the errors that are found.
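To give a flavor of what such a check involves, here is a simplified, hypothetical sketch; the specification format and matching logic are assumptions for illustration, not the approach implemented in our tool or in OpenAPI tooling:

```javascript
// Hypothetical, hand-written minimal spec standing in for a full API specification.
const spec = {
  paths: {
    '/v1/users': { methods: ['GET', 'POST'] },
    '/v1/users/{id}': { methods: ['GET', 'DELETE'] }
  }
};

function checkRequest(spec, method, url) {
  const path = new URL(url).pathname;
  // Find a path template that matches the request path,
  // e.g. '/v1/users/{id}' becomes the pattern ^/v1/users/[^/]+$.
  const entry = Object.entries(spec.paths).find(([template]) => {
    const pattern = '^' + template.replace(/\{[^}]+\}/g, '[^/]+') + '$';
    return new RegExp(pattern).test(path);
  });
  if (!entry) return `Unknown path: ${path}`;
  if (!entry[1].methods.includes(method)) return `Method ${method} not allowed on ${path}`;
  return null; // no problem found
}

// A typo in the path is reported before the request is ever sent:
console.log(checkRequest(spec, 'POST', 'https://api.example.com/v1/usres'));
```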
Frequent changes
Web APIs typically undergo frequent changes, often even breaking ones. The existence of services like API Changelog, which notifies developers about changes in the APIs they are using, is evidence of this problem. Indeed, research finds that many applications show errors or even fail in the face of web API changes. While automatic code checking facilities, as described above, can help to mitigate this problem, they rely on the existence of up-to-date web API specifications, which often enough are not available.
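As a hypothetical illustration of how a seemingly small change breaks clients, suppose a provider renames a field in a JSON response (the endpoint and field names below are made up):

```javascript
// Response before the change:        { "user_name": "Alice" }
// Response after a breaking change:  { "username": "Alice" }

fetch('https://api.example.com/v1/users/42')
  .then(response => response.json())
  .then(user => {
    // Client code written against the old field keeps running after the rename,
    // but silently reads undefined instead of raising an error.
    console.log(`Hello, ${user.user_name}!`);     // prints "Hello, undefined!"
  });
```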
Third party control
Web APIs are often controlled by third parties. This is true not only for public APIs, but also for APIs within an organization, as microservice architectures give dedicated teams control over API-exposed services. This means that your application or service using a web API is dependent on the decisions of the API provider. If the provider chooses to introduce breaking changes or even discontinue an API, consumers are left on their own. Consider the discussions about Instagram's breaking API changes one year ago as an example of the severe impact that this reliance can have.
Varying quality of service
Web API requests are remote calls whose quality, including availability and latency, depends on network connectivity and on the API provider. As such, quality can vary significantly, with severe implications for client applications. For example, we found that web APIs exhibit significantly different qualities depending on the geographic region they are consumed from. The figure below exemplifies this finding.
[Figure: Latency of a web API in different geographic regions]
In our recent book, we outline both how to measure web API qualities and how to mitigate quality issues architecturally.
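To make this concrete, the sketch below shows one common client-side mitigation, a timeout combined with retries and exponential backoff; this is a generic pattern assumed for illustration, not a technique quoted from the book:

```javascript
// Wrap a web API call with a timeout and a bounded number of retries.
async function fetchWithRetry(url, { retries = 3, timeoutMs = 2000 } = {}) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), timeoutMs); // enforce the timeout
    try {
      const response = await fetch(url, { signal: controller.signal });
      if (!response.ok) throw new Error(`HTTP ${response.status}`);
      return await response.json();
    } catch (err) {
      if (attempt === retries) throw err;   // out of attempts: surface the error
      // Exponential backoff before the next attempt.
      await new Promise(resolve => setTimeout(resolve, 100 * 2 ** attempt));
    } finally {
      clearTimeout(timer);
    }
  }
}

// Usage: const users = await fetchWithRetry('https://api.example.com/v1/users');
```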
Over two years into our journey of researching web APIs from a software engineering perspective, it seems to me that the more I learn, the more challenges arise. Web APIs are a fruitful space for software engineering research. The ongoing shift to microservice architectures, new cloud runtime models like serverless functions, and emerging API paradigms like GraphQL or Falcor only reinforce this observation.