Beyond the API: The Event-Driven Internet


Summary: There's no question that APIs are hot and generating a lot of buzz and excitement. In this article, I'll review why APIs are causing so much excitement, argue that APIs alone are not enough, and propose a model that significantly extends the power of an API: an event-driven view of the Internet. Extending your API with events makes it more capable and your business more competitive. After reviewing event models, I discuss webhooks as an event model that complements an API strategy and then briefly describe how Kynetx extends the webhook idea into something truly powerful. Using Kynetx, you can give your API an instant developer program. Let Kynetx help you with your API strategy.

APIs are Hot

A recent article at Forbes by Dan Woods talked about a "multi-dimensional gold rush [happening on the Internet]--with APIs at the center."

To understand the business value of something as technical sounding as an "application programming interface"--the almost never-used expansion of the acronym API--look at several powerful forces that are converging to make a programming tool for developers an engine for sales, marketing and customer lock in. APIs are rapidly going to be vitally important for every business, not just the Silicon Valley technology giants.

APIs allow developers to create new applications that incorporate the service underneath the API. For example on eBay, a programmer can use the API to create a gateway to move all the items in a company's catalog onto the auction site. But it works the other way as well. Items that are being auctioned off on eBay or that are for sale on Amazon can find their way to new audiences when programmers use APIs to include the product listings in their own applications.

...

If this sounds a bit obscure, think of it this way: Google and Facebook have 5 billion API calls a day, according to John Musser, editor of ProgrammableWeb, the leading publication covering internet-accessible APIs. Twitter has 3 billion calls a day that amount to 75% of its traffic. Salesforce.com has 50% of its traffic flowing through APIs. Don't think APIs, think billions of dollars of money sloshing through obscure programming methods.

Fred Wilson, a partner in venture capital firm Union Square Ventures, put it this way in his note about investment strategy at the beginning of 2010: "Developers are the new power users. If you cater to them, you can build a large user base with significant network effects." To cater to developers you must offer them an API to play with.

From What's Fueling The API Gold Rush - Forbes.com
Referenced Wed Sep 15 2010 16:23:02 GMT-0600 (MDT)

I can't help repeating Fred's comment "Developers are the new power users. If you cater to them, you can build a large user base with significant network effects."

If this is still not resonating with you, spend a little time with Sam Ramji's talk on Darwin's Finches, 20th Century Business, and APIs. Here's the summary from Sam's blog:

There is a perspective some people apply to evolution, social theory, and language change called punctuated equilibrium (credit goes to Jess Ruefli for pointing this out). It suggests that change is not gradual, but that change comes in sudden punctuated bursts between stretches of relative stasis or equilibrium. The Web from 1995-2000 was certainly a surge like this as every business "went online" in order to continue to function in a newly competitive economy. I believe that we're going through such a surge right now as the early versions of the web - designed for people using browsers - gives way to the next version: using APIs to design the web for people using applications that communicate on their behalf in complex ways to the services that make up the world's businesses. If we look to evolution and to the last similar shift - the move from direct to indirect channels for business in the 20th century, we can apply old lessons to this new world in order to succeed.

The primary point Sam makes is that as the offline world went from direct to indirect selling in the mid-20th century, successful businesses were those that sold through successful retailers. So too, the Web is going indirect. Building a Web site for people to visit with their browser is "direct." Building an API where other Web sites can use your data to succeed is "indirect." Good APIs take your business with them all over the Web as they get used. But for that to happen, your API has to appeal to the businesses that will use it. You have to make them successful for your business to succeed. Sam gives some principles that help you help your partners succeed:

  1. Realize that developers are your channel
  2. Be recombinant and easily mixed
  3. Unlock your legacy data into open APIs
  4. Drive new data into your system via open APIs
  5. Support your application ecosystem

When you hear people clamoring about cloud computing, you'll see a lot of smoke around infrastructure cloud providers like Amazon, Google, and Rackspace, but that's not where the fire (energy) is. The energy, and the real money, will be in APIs. That's what has Dan Woods so excited in the Forbes article I reference above.

Beyond the API: Events

I'm a huge fan of APIs. I think they change the game and will open up numerous new services on the Internet. But with all the excitement over APIs, I think they only get us partway to where we want to be. An API can only respond when it receives a request. Many interesting services will also need to make requests of their own. That pattern is broadly called an event architecture.

Events, and the need for them, aren't surprising to most people who've worked with computers. We've all contemplated the use of interrupts in computing systems to avoid the need for one device or system to constantly poll another. If an I/O device can't interrupt the CPU, the CPU has to constantly check with the I/O device to determine whether it has something new.

Moving up the stack a bit, one of the disadvantages of RSS is that there's no standard way for an RSS feed to notify interested parties when it has a new item. The only way to get new items is to constantly poll the system hosting the feed to see if it's changed. PubSubHubbub is a protocol that rectifies this by defining a way for systems to register their interest in a particular feed. When the feed publishes a new item, subscribers are notified. More generally, an event is raised telling subscribers there are new items.
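
To make the subscription half of that concrete, here's a minimal sketch of a PubSubHubbub subscription request using just the Python standard library. The hub, topic, and callback URLs are placeholders; check your hub's documentation for the exact parameters it expects.

    # Minimal sketch of a PubSubHubbub subscription request. The hub, topic,
    # and callback URLs below are placeholders for real ones.
    from urllib.parse import urlencode
    from urllib.request import urlopen

    HUB = "https://hub.example.com/"    # the hub the feed advertises
    params = urlencode({
        "hub.mode": "subscribe",
        "hub.topic": "https://example.com/feed.xml",      # the feed you care about
        "hub.callback": "https://myapp.example.com/push", # where new items get POSTed
    })

    # The hub verifies the callback and then POSTs new feed items to it as they appear.
    response = urlopen(HUB, data=params.encode("utf-8"))
    print(response.status)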

But the need for event-driven architectures in combination with APIs goes beyond the standard "polling vs. interrupts" argument. Most of us associate that too closely with the "I've got something for you" kind of occurrence. This feels too much like message passing and implies tighter coupling than is necessary or desirable for most event systems. In general, events can be associated with several types of occurrences:

  • A specified action is taken (e.g. a new item is published to a feed)
  • A spontaneous act of nature too complicated to be fully understood is detected (e.g. your computer just crashed)
  • One or more conditions are determined to have been met (e.g. the temperature of the oven reaches 450 degrees)

In these examples the system raising the event has no knowledge or understanding of the downstream systems that see the event or what they might do. Event architectures can be characterized by extreme loose coupling.

Event Systems

Event systems can be characterized as "simple" or "complex." Simple event systems look for and react to one kind of event. In contrast, complex event systems can monitor scenarios made up of more than one event, reacting only after certain patterns of events have been detected. The events in such a scenario may come from multiple event domains and be correlated in space, time, or causality. Responding to complex event scenarios requires sophisticated event interpretation, event pattern definition notations and matching engines, and event correlation techniques.
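
To make the distinction concrete, here's a small sketch in Python (not any particular product's engine) of a complex event scenario: react only when a "login_failed" event is followed by a "password_reset" event from the same source within sixty seconds. The event names and structure are made up for illustration.

    # Illustrative complex event correlation: act only when a "login_failed"
    # event is followed by a "password_reset" from the same source within
    # sixty seconds. The event names and structure are hypothetical.
    import time
    from collections import defaultdict

    WINDOW = 60  # seconds
    recent_failures = defaultdict(list)   # source -> timestamps of login_failed events

    def on_event(event):
        now = time.time()
        source, kind = event["source"], event["type"]

        if kind == "login_failed":
            recent_failures[source].append(now)

        elif kind == "password_reset":
            # Correlate with earlier failures from the same source inside the window.
            if any(now - t <= WINDOW for t in recent_failures[source]):
                react(source)
            recent_failures[source].clear()

    def react(source):
        print(f"Scenario matched for {source}: failed login followed by reset")

    on_event({"source": "alice", "type": "login_failed"})
    on_event({"source": "alice", "type": "password_reset"})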

Event systems have several components:

  • event generators - the event generator raises the initial event. There may be translation steps to get the event into the right format for use by the event processor.
  • channels - this is the messaging protocol that's used to transfer the event from the generator to the processor. The channel can take many forms and a given event system might support multiple protocols such as HTTP, XMPP, SMTP, and so on.
  • event engine - event engines match event patterns and initiate action. Simple event engines respond to each event separately and are sometimes referred to as "handlers" or "listeners." In complex event scenarios, the event engine processes the events and only initiates action when the simple events in the scenario match the required correlation pattern.
  • responders - responders take directives from the event engine and take desired actions. A given event scenario match in the engine may result in zero or more activities on a variety of responders.

The overall power of a given event system is proportional to its flexibility: the number and type of generators it supports, the variety of channels, the complexity of the event scenarios it accepts, and the number and types of activities it can initiate. Of course, the greater the power of the event system, the more complicated it can be to configure.

The loose coupling properties of event systems follow from the fact that the event scenarios and follow-on activities are defined according to the needs and desires of the interested parties, not the organizations and systems generating the events. Once an event has been generated, anyone who sees it can respond however they like.
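
Here's a toy sketch of those four components in Python. The point is the decoupling: the generator just publishes an event onto a channel, and the engine decides which responders, if any, should act. All of the names are hypothetical.

    # Toy sketch of the four components. The generator publishes without knowing
    # who will consume the event; responders register independently with the engine.
    channel = []          # stand-in for a real transport such as HTTP or XMPP
    responders = []       # responders that have registered with the engine

    def generator():
        channel.append({"type": "temperature", "value": 455})   # raise an event

    def engine():
        for event in channel:
            # Simple pattern: the oven has reached its target temperature.
            if event["type"] == "temperature" and event["value"] >= 450:
                for respond in responders:
                    respond({"directive": "turn_off_oven"})

    def oven_responder(directive):
        print("Responder acting on:", directive["directive"])

    responders.append(oven_responder)
    generator()
    engine()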

Webhooks

The webhook concept, popularized by Jeff Lindsay, is an example of a simple event system built on top of the existing Web infrastructure.

  • generators - in the webhooks model, anything that can call a URL can be a generator and events are raised by performing an HTTP method on a URL
  • channel - the channel in webhooks is the HTTP protocol
  • event engine and responders - the engine and responder are the web application that the URL points to.

Webhooks overlay an event model on the Web. There's no "system" per se, just a usage pattern for enabling user-defined callbacks on the Web. Jeff points out that webhooks can be used for everything from simple notifications to more sophisticated service chaining and even Web application plugins.
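
A webhook listener can be just a few lines of code. The sketch below uses only the Python standard library; the port and what you do with the payload are placeholders for whatever the calling service actually sends.

    # Minimal webhook listener using only the Python standard library.
    # What you do with the payload is up to you; here we just print it.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class WebhookHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            payload = self.rfile.read(length)   # whatever the generator POSTed
            print("Event received:", payload)
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        # Any service that can POST to http://your-host:8080/ is now a generator.
        HTTPServer(("", 8080), WebhookHandler).serve_forever()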

There are already a number of Web applications that support webhooks (some, I'm sure, without really knowing they were doing so--true of all great, natural patterns). Perhaps the most familiar is PayPal's Instant Payment Notification (IPN). The idea is simple: PayPal lets users provide a URL that it calls whenever any of a number of transactions occur. As the developer of the application at the other end of the URL, you can do whatever you like when PayPal calls your listener (i.e., the CGI program you write) using the URL you supply. Your program can ignore the PayPal data, store it in a database, or whatever.

Figure: PayPal IPN

This is an example of a simple notification webhook. Amazon Payments has a merchant callback API that functions as a webhook plugin. When Amazon gets an order on your behalf, it calls the webhook you give it and expects to receive the taxes and shipping cost in return. You return those and Amazon puts them in the checkout for your customer. Again, your listener can do anything you like when the callback URL (webhook) is called, as long as you return the right data. Amazon is using webhooks to create a flexible plugin architecture for its service.
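
The plugin style differs from simple notification in that the caller expects structured data back. The sketch below is not Amazon's actual callback format--the field names and JSON encoding are assumptions for illustration--but it shows the shape of a listener that computes values (here, tax and shipping) and returns them to the caller.

    # Sketch of a plugin-style webhook: compute tax and shipping for an order
    # and return them to the caller. The JSON field names are hypothetical;
    # a real service defines its own callback format.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class CheckoutCallback(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            order = json.loads(self.rfile.read(length))

            reply = {
                "tax": round(order["subtotal"] * 0.065, 2),          # illustrative tax rate
                "shipping": 4.99 if order["subtotal"] < 50 else 0.0, # illustrative policy
            }

            body = json.dumps(reply).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8081), CheckoutCallback).serve_forever()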

Twilio is a cloud-based telephony platform that uses webhooks to do service chaining. For example, when you use Twilio to place a call, you can give it a webhook that it calls when the call is answered. You return an XML payload that defines what happens next. Twilio then calls a webhook for the next action. Twilio and the webhooks it calls form a chain of executions that achieves the overall purpose.
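
In Twilio's case, the data you return is an XML document (TwiML) describing the next step, and that next step can point at yet another webhook. The handler below is a hedged sketch--the /next-step URL is a placeholder, and you'd consult Twilio's documentation for the exact request parameters and TwiML verbs.

    # Hedged sketch of Twilio-style service chaining: when the call is answered,
    # Twilio requests this URL and we return TwiML describing the next step,
    # which points at another webhook (/next-step is a placeholder).
    from http.server import BaseHTTPRequestHandler, HTTPServer

    TWIML = """<?xml version="1.0" encoding="UTF-8"?>
    <Response>
      <Say>Thanks for answering.</Say>
      <Redirect>https://myapp.example.com/next-step</Redirect>
    </Response>
    """

    class CallAnswered(BaseHTTPRequestHandler):
        def do_POST(self):
            body = TWIML.encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "text/xml")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8082), CallAnswered).serve_forever()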

Webhooks provide a useful pattern for extending APIs beyond the traditional interactions that have defined them. Webhooks are easy to implement and very flexible because they require no special tooling or software. Any application that speaks HTTP can generate or process webhook-style events.

Beyond Webhooks: Kynetx

Webhooks have several limitations:

  • webhooks' lack of a formal framework means that programmers are responsible for managing event scenarios. For simple listeners this isn't a problem, but as event scenarios become more complicated, it places a large burden on the developer.
  • webhooks are largely about the Web (go figure). This means that interacting with event generators or responders in other domains (e.g. email) requires building a gateway between that target domain and the Web. This isn't a big deal, except that the lack of standards around how these gateways should work makes reusing them difficult.
  • webhooks don't supply any special facility for managing user identity. Each service defines its own method for managing identity.

A more formal event framework can overcome these problems by defining a notation for the engine and standards for interactions. Kynetx has developed an event service for the Internet that provides this more formal structure. The Kynetx Network Service (KNS) maps onto the event model given above as follows:

  • generators - in KNS, programs called endpoints raise events using an agreed-upon format. Endpoints can use the KNS APIs to determine which events are salient in order to limit communication traffic.
  • channels - generators communicate with the engine over HTTP. Protocol translators can be used to accommodate other protocols.
  • event engine - the Kynetx Rule Engine (KRE) is the event processor. KRE supports complicated event scenarios. Handlers are scripted using the Kynetx Rule Language (KRL), a domain-specific language for Internet events.
  • responders - endpoints are also responsible for responding to directives from KRE and taking appropriate action.

Endpoints serve two functions in this architecture, but there's no need for any given endpoint to do both, although most do. Endpoints that merely raise events or just respond to directives are supported.

Figure: KNS Overview

The Kynetx Rule Language provides a unifying notation that reinforces the conceptual event framework implemented in KNS, as well as providing all the traditional benefits of a notation. In particular, KRL provides a convenient notation for specifying complex event scenarios. KNS automatically builds state machines from those specifications that correlate multiple events across event domains. As each new event is received, KRE evaluates it in the context of past events and the event specification to determine when activity should be initiated.

As mentioned, KRL is a domain-specific language for specifying event handlers. Each rule in a ruleset has an associated event specification that determines whether the rule is selected. Whenever a rule is selected, the rule body is evaluated. The ultimate effect of this evaluation is a set of directives that inform endpoints of the actions they should take. Along the way, data sources and persistent data about the entity raising the event are consulted and used to compute the appropriate set of directives.
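
To show the shape of that evaluation without reproducing KRL syntax here, the following Python sketch mimics the pattern: a rule pairs an event specification with a body that consults per-entity persistent data and returns directives. It's an illustration of the pattern described above, not Kynetx's implementation, and all the names in it are hypothetical.

    # Illustrative (non-KRL) sketch of the pattern described above: a rule pairs
    # an event specification with a body that consults per-entity persistent
    # data and produces directives for endpoints. All names are hypothetical.
    from collections import defaultdict

    entity_vars = defaultdict(dict)   # persistent variables, stored per entity

    def select(event):
        # Event specification: fire on web pageview events.
        return event["domain"] == "web" and event["type"] == "pageview"

    def rule_body(entity, event):
        visits = entity_vars[entity].get("visits", 0) + 1
        entity_vars[entity]["visits"] = visits        # persistent, per entity
        return [{"directive": "notify",
                 "message": f"Welcome back--visit number {visits}"}]

    def evaluate(entity, event):
        # Selected rules are evaluated; the result is a set of directives.
        return rule_body(entity, event) if select(event) else []

    print(evaluate("alice", {"domain": "web", "type": "pageview"}))
    print(evaluate("alice", {"domain": "web", "type": "pageview"}))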

For more details on how Kynetx works, check out our free white paper: The Kynetx Rule Language - The First Internet Application Platform.

It's All About the Individual

One of the key features of KNS that makes it uniquely built for programming interactions on the Internet is that it has identity built in. Every event is raised on behalf of a particular entity¹. Even if you and I have the same apps installed (and thus are using the same rulesets), the behavior you see might be radically different from the behavior I see, based on our context.

Figure: Endpoints, events, and directives

This contrasts with other business rule languages, where the process is the fulcrum around which the ruleset executes. KRL primitives understand the context of the individual and take it into account when they execute. Even persistent variables store their values on behalf of a specific entity.

By putting entities--the individual--at the center of KNS, we have created an event system aimed squarely at user-centric apps. This is why Kynetx has been so interested in the personal data store conversation: KNS is a natural way to script the interactions that individuals have with various services around the Internet.

KRL and KNS Benefits

As we've seen, KNS and KRL are unique in their use of events as a unifying abstraction for creating Internet apps. They are also unique in their focus on the individual and their support for user-centric applications. In addition to these key differences, KNS and KRL provide a number of important benefits in creating an event-driven Web:

  • Cross domain--Kynetx apps can work across domains so that user purpose can be advanced regardless of online location. KRL is designed to cross the silos that have sprung up as standalone Web applications, so developers can create applications that mash up data from all across the Internet regardless of location or protocol.
  • Cross protocol--Kynetx apps easily work across Internet protocols such as the Web, email, and so on. KNS is easily extensible by developers to support any protocol.
  • Data and context driven--KRL and KNS are designed to easily and naturally work with the burgeoning array of data and APIs available online. Correlated data provides context about users. Using KRL and KNS, developers can create applications that respond to user context for a more compelling experience.
  • Cloud based--because Kynetx apps are cloud based, they work consistently and ubiquitously. They can be accessed from multiple platforms while providing the same context, identity, and experience. Cloud-based programming means that programs always work: they are updated in response to changing conditions without user interaction.
  • Browser independent--Kynetx apps work in all the major browsers without modification. The browser has become a sort of universal application platform, but browser differences make programming on them difficult. KRL provides a unifying framework for easily working with all of the popular browsers.
  • Internet app centric language and design--KRL provides a powerful notation for creating apps that run across the Internet. KNS provides the platform that makes that possible.
  • Security and privacy are built in--the architecture of KNS is designed to structurally limit nefarious activity. In addition, operating in the cloud makes it easy to turn off apps that misbehave. User control provides the means to create privacy-respecting apps.
  • Late binding--Kynetx apps run at the exact moment that the user needs them. They bind to data and functionality that is appropriate for the user's current context. In contrast, conventional Web applications exist at a single location and operate without the benefit of user context.
  • Multi-endpoint--KNS provides application program endpoints that work with Web browsers, email servers, and other Internet systems. Kynetx plans to provide endpoints for popular and important Internet protocols and applications as part of its ongoing development roadmap. Developers can easily extend KNS to include endpoints for any Internet protocol.
  • Developer friendly--KRL is designed to provide developers with a powerful and easy-to-use abstraction layer for apps. KRL provides a notation that lets programmers easily complete Internet programming tasks that previously took many lines of code. Event expressions, datasets, and data sources are just a few examples. Because Kynetx apps are hosted, developers are spared the operational and maintenance headaches that come with servers.

Using KNS as part of your API Strategy

As I pointed out at the beginning, there's a lot of energy surrounding APIs right now. If you're building a company and considering an API, I'd ask you to consider going beyond the simple API. Consider how an event strategy can complement your API to provide even greater functionality. We'd be happy to consult on that.

If you do decide to incorporate events into your API strategy, there are two ways to use Kynetx to make it even snappier:

  1. Implement a straight webhook strategy of Web callbacks and a defined interaction protocol.
  2. Build a Kynetx endpoint into your product so that raising events that Kynetx understands--and taking actions based on those events--is easy for your developers.

Personally, I think you ought to do both because it's relatively easy. If you go with a straight webhook implementation, we supply a webhook translation service so that your webhooks can be used by Kynetx developers. Using Kynetx as part of your API strategy gives you an instant developer program and makes your API available to Kynetx developers as well. We're happy to help you develop your strategy and work out how Kynetx can help you make your API program a success. Just let us know.

Footnotes:

  1. Actually, you can raise events without specifying an entity, but this is usually done to configure an application or some general system. Think of entity-less events as similar to class variables in an object-oriented language.
