The book of Hydrus

Prologue:

It has been a great three months of both coding and learning. What started out as an abstract idea has now become a concrete repository of code and functionality. GSoC has been a great journey, and I would encourage every student of Computer Science to take part in it at least once.

I will share here the details of our journey: the problems we faced in these three months, the different things we have accomplished, how the project has evolved from nothing, and how the work we have done so far can be improved upon in the future. Huge thanks to Akshay Dahiya, my fellow GSoCer who worked along with me on Hydrus and the Hydra-flock demo. He’s awesome at what he does and it wouldn’t have been possible to do all of this alone. A shout-out to our mentors Lorenzo and Kristian and all others, who were always supportive and extremely helpful.

Please note that this is a comprehensive compilation of the journey of Hydrus; if you just want to understand the different parts of Hydrus and how they work, please read the TL;DR version here.

Introduction:

Before we begin, let me give you a little background about my organisation and the project. I am working with the Python Software Foundation under a sub-org called HYDRA. Our project, named Hydrus (the Hydra Universal Server), is based on a very new technology that is still under development and has not yet garnered the following that most other web technologies have. In my first blog post, I spoke about Hydra, a vocabulary proposed by the folks over at the W3C community that aims at automating REST API setup and usage. Hydra is a set of vocabularies that can be used to document REST APIs so that users need not read additional documentation to understand how the API works (which links do what). You must be thinking, “So what’s the big deal with that?” Well, simply put, the way this documentation is defined automates a lot of tasks when it comes to APIs. The biggest problem faced by programmers who use APIs is updating their applications based on changes in the API. For example, suppose you use an API that gives you information about the weather conditions at your location at http://weathernewyork.com/weather/. For some reason, the company that used to serve the data to you decided to expand and started giving weather data for a neighbouring city as well, changing the endpoint for your location to http://weathernewyork.com/weather/newyork. Now if you had 10 applications using the API, you would have to reprogram all 10 of them to use the new endpoint.

Suppose the old API also served the data in a format like:

{
    "Temperature": "103 F",
    "Rain": "5 mm",
    "Humidity": "40%"
}

And now that they have expanded, they change it to something like:

{
    "City": "New York",
    "Temperature": "103 F",
    "Rainfall": "5 mm",
    "Humidity": "40%",
    "Snow": "0 mm",
    "Wind": "10 mph"
}

All your currently running services that rely on the previous format of the data would undoubtedly crash and cause huge losses in both time and resources. Although this may not seem like a big change, the amount of code that would need to be rewritten to deal with it is a real problem.

This is where Hydra plays a role. If there were a way for us to document the API, i.e., the URLs for all the data that is served, the format of the data, the operations that are allowed on the data, etc., it could drastically reduce the workload of programmers and even allow for smart machines that could communicate with each other and perform a large variety of tasks. The Hydra vocabulary does exactly this, and it opens up new possibilities for innovation.

Imagine smart clients that could just read the Documentation of the API and decide how it must be used. No need to hard-code the links and the operations that need to be performed on the API to get information anymore. Not only that, but imagine generic servers that don’t need to be programmed and only need the documentation to be able to set up and serve data. What if you could write API Documentation for the Weather API you use and give it to an application that would create a server for you that functions in the same way as the original Weather API? Something like that could have saved the weather company the huge amounts of money they paid a programmer to create the web server for them. Sounds like a pretty abstract idea, but these are exactly the two things we have been working on for the past three months.

The Hydra Universal Server (a.k.a. Hydrus) is a generic server that can create REST APIs with little to no programming, using just a Hydra API Documentation. Hydrus requires only a knowledge of Hydra and minimal Python programming to set up production-ready REST APIs based on Hydra.

There are a lot of components in Hydrus that make all this possible, and although they may seem easy to understand, a great deal of thought has gone into creating them.

Chapter 1: The design that made everything possible

How will it work?

Let us start by looking at what Hydrus is supposed to do and then at how we might go about doing it. We will make decisions as we progress about how the design should be turned into a working product. Please note that Hydrus is a project that has been built from scratch with minimal requirements, so that anyone is able to use it and benefit from what we have done. That being said, the design may not be the best one and may have several flaws, but I am sure that in the future we will get a lot of support and many brilliant minds working on improving it.

At the most abstract level, the things Hydrus does can be summarised in the following diagram:

[Figure: flo1 — a high-level overview of what Hydrus does]

For those of you who have previously worked with Linked Data and REST APIs, the ideas of Classes, Properties and Collections may be familiar. Like any data that needs to be organised, data in a REST API, especially one based on Linked Data, is defined and described using Classes and Properties. If you are unfamiliar with the concept of Linked Data, there is a basic tutorial here to help you understand. In simple words, Linked Data classes and properties are used to define data, in much the same way as in programming languages: classes are defined with the properties that each class has, and data is tagged with these classes to denote what the data represents. The classes can be linked, inherited and used as properties, again much as in programming languages. Collections, on the other hand, are similar to arrays or lists: they contain some information grouped together. These three things are important to understand before we dive into Hydrus because they form the base on which everything else is built.
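To make this concrete, here is an illustrative fragment, written as a Python dictionary, of how a class with its properties and a collection might be declared in a Hydra API Documentation. The field values are hypothetical and trimmed for brevity; the full vocabulary defines many more fields.

# An illustrative sketch, not a complete Hydra document.
drone_class = {
    "@type": "hydra:Class",
    "title": "Drone",
    "description": "A drone patrolling an area",
    "supportedProperty": [
        {"@type": "SupportedProperty", "property": "vocab:Speed", "required": True},
        {"@type": "SupportedProperty", "property": "vocab:Position", "required": True},
    ],
}

# A Collection simply groups members of a class together,
# much like a list of Drone objects.
drone_collection = {
    "@type": "hydra:Collection",
    "title": "DroneCollection",
    "member": [],
}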

A lot of sources were considered while designing Hydrus. The primary implementations of similar applications that we looked at were Levanzo for the server and Redis for the storage. Although we later decided that, as a starting point, a simple design would be enough to get things running, studying these gave us insight into what we needed to do.

How will it store anything?

During the first phase of GSoC, we settled upon the Spacecraft and Subsystems vocabulary as a great way to demonstrate the capabilities of Hydrus. We wanted to set up a demo wherein multiple systems would simulate satellites running instances of Hydrus with a suitably designed API Documentation using the Spacecraft and Subsystems vocabulary. Keeping these things in mind, the first part of the coding round was to be used for designing a Hydra-based REST API that could serve data in the Spacecraft and Subsystems vocabulary. It was still pretty unclear to us how Hydra needed to be used, so we decided to have endpoints for every class in the vocabulary and added basic GET and POST operations to each of them.

One of the fundamental problems with any data is finding the best possible way to store it; it is one of the main reasons why people who design databases and storage models will always be in demand. When we started our first phase, we hadn’t thought about a very generic model for storing our data. We knew we had to set up a REST API for the Spacecraft vocabulary, and so we designed a database tailored to the available data [PR]. We soon realised that what we had designed was not generic and would not work for every vocabulary, so we mended our ways. This was a pretty iterative process and it did not happen instantly, so I cannot point you to a single commit to show the design, but here are some of them [PR, PR]. All in all, the final design that we settled on was this:

[Figure: graph — the final database design]

You can find a good explanation of the design on this page.

Once we had the models in place, the next thing we needed was a way to add data to the models. This would have been pretty simple too if we had to do it for the Spacecraft vocabulary alone, but we needed things to be generic. We knew that if we did not have models that could store and manipulate generic data, it would be extremely difficult to implement this later on, since we would be designing the server around specific models and not generic ones. Any REST API has basically four operations that can be done on an endpoint (there are others, but they are rarely used): GET, POST, PUT and DELETE. While GET and DELETE are pretty self-explanatory, the use of POST and PUT is quite interchangeable and is more of a design preference, where people use one to create new data and the other to modify it. For Hydrus, we made the decision to use PUT to add/create new data on the API and POST to modify existing data on it. The four basic operations that we had to perform on the data were then defined, and the server would use these operations based on the request [commit, commit]. We later had to define some more operations to deal with collections and other things, but these four were the basis on which all other operations were defined.
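To illustrate, here is a minimal sketch of what such CRUD helpers might look like, assuming a SQLAlchemy session and a simple Instance model; the names here are illustrative and not Hydrus’s actual API.

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Instance(Base):
    """A single object of some API class, stored with a serialised body."""
    __tablename__ = "instances"
    id = Column(Integer, primary_key=True)
    type_ = Column(String)  # the API class this object belongs to
    data = Column(String)   # serialised object body

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

def get(id_, session):
    """GET: read one object, or None if it does not exist."""
    return session.query(Instance).filter_by(id=id_).one_or_none()

def insert(object_, session):
    """PUT: add/create a new object (Hydrus uses PUT for creation)."""
    session.add(object_)
    session.commit()

def update(id_, new_data, session):
    """POST: modify an existing object (Hydrus uses POST for updates)."""
    session.query(Instance).filter_by(id=id_).one().data = new_data
    session.commit()

def delete(id_, session):
    """DELETE: remove an object."""
    session.query(Instance).filter_by(id=id_).delete()
    session.commit()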

Meanwhile, we also had to create data and an API Documentation for the server to use. In fact, we were still unclear as to how we could write our own API Documentation, so I ended up creating a script that would parse an RDF/OWL vocabulary and create an API Documentation for each class in the given vocabulary [commit]. There was also a script that would generate random data for the Subsystems vocabulary [commit]. The classes and properties were also added automatically to the database. Although this was bad practice, we decided to go along with it as we did not have a proper API Documentation. Ideally, you would need an API Documentation to know the structure of the data that needed to be served; you can’t figure that out from the OWL declarations alone (you can’t make a good dish with just a list of ingredients; you also need the recipe). Keeping this in mind, we added another script that would use the API Documentation to figure out the classes and properties that needed to be added to the database to serve the data [commit]. This was something that later helped us when we made the push towards a generic server during the second phase.
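For a flavour of what parsing a vocabulary involves, here is a minimal sketch using rdflib; the actual Hydrus scripts do considerably more than this, and the function name is illustrative.

from rdflib import Graph
from rdflib.namespace import OWL, RDF, RDFS

def classes_from_vocab(path):
    """Yield (class URI, label) pairs for every owl:Class in a vocabulary."""
    g = Graph()
    g.parse(path, format="xml")  # assuming an RDF/XML (OWL) vocabulary file
    for class_uri in g.subjects(RDF.type, OWL.Class):
        yield class_uri, g.value(class_uri, RDFS.label)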

Chapter 2: Let’s get it working

We had the operations in place, we had models to store our data, we had scripts that would add the required metadata from the vocabulary and the API Documentation, and we had generators that would create random data for us. All that we needed to do now was put everything in place and get it to run. This may sound easy, but honestly, this was the hardest part of phase one.

We started out with simple views; nothing was generic, and only the data needed for the vocabulary was used. We created methods that would handle requests for each endpoint separately [PR]. There was no generality in this implementation, and we soon realised that it would not be easy to set up endpoints automatically based on the classes. We added some level of generality by treating the endpoint as an argument and then checking whether it was valid or not [commit]. This process had nothing to do with the API Documentation. In fact, our server did not follow the API Documentation that it was serving until the second phase was over.

We assumed that every endpoint had a collection and then created a similar implementation for the collection endpoints. There were a lot of commits added towards the end of the first phase where we had to get things working. The contexts for the endpoints and their data also needed to be generated dynamically. The objects had to be created in the same way that the API Documentation specified, and we had to check whether the endpoint classes were present in the Doc or not. It was a mess, and we somehow managed to get everything working [PR] before the first phase ended. We set up a demo, automated the server setup using Docker instances, and ran our server on AWS to serve data. We used the Hydra Console [link] to see if everything was as it should be. Apart from a few small glitches, everything seemed to be working fine, and we wrote tests to check everything as well (they were passing, of course).

Chapter 3: Generality

Doc Writer

As I mentioned before, nothing that Hydrus was doing was actually governed by what was specified in the API Documentation. There were no restrictions imposed whatsoever, even if the API Documentation specified them. In other words, no matter what documentation you gave Hydrus, you would get an API with all classes being served on endpoints, all of them having a collection, and all four operations possible on each endpoint. This was not ideal, and such a server would not be of much use. And so we decided to change things, but to do so we needed to fundamentally change the way the API Documentation was stored and used. Until now, the documentation had been kept in a simple Python dictionary which was parsed once to generate the endpoints and create the EntryPoint object (the EntryPoint object wasn’t actually used at all up to this point).

Enter the Doc Writer: During the second phase, we decided that a demo with the Spacecraft vocabulary would be highly complex and that we needed a much simpler demo to showcase Hydrus without overcomplicating things and shifting the focus of the demo away from Hydra and Hydrus. We decided on a simple forest patrol simulation where we had drones patrolling a given area for forest fires (temperature readings) and a central controller at the centre of the forest. The drones and the controller would talk to each other and decide upon the actions that needed to be taken. You can find details of the simulation in my blog posts or the hydra-flock repo.

Now, the simulation required an API Documentation to be created, and it was very difficult to create multiple classes and properties without making mistakes in between, so I did what every Software Engineer would do: I made functions to create the Documentation for me. These functions were meant to be tools for writing a new API Documentation. In the set of functions that I wrote, there was one function to declare a class using a Python dictionary and another function that would take this dictionary and add properties to it. Our mentor (Lorenzo) suggested that we change this process by making the class creator into a Python class with a method for adding properties. These changes were relatively simple and I implemented them, but fundamentally this got me thinking about the way we store the API Documentation. A Python dictionary was not very “programming friendly”, i.e., you couldn’t do much with it in terms of adding functions and manipulating the data in it efficiently (especially if the dictionary had multiple layers). So I thought of creating a class that would store an API Documentation. It was supposed to be a simple class that contained a Python dictionary and some methods to modify it, but after a lot of thinking and iterations, I came up with this final implementation:

[Figure: doc_writer — structure of the final Doc Writer implementation]
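To give a feel for the idea, here is a hypothetical sketch of what such a class might look like; the actual class and method names in Hydrus’s Doc Writer may differ.

class HydraClass:
    """Represents one class in the API Documentation."""

    def __init__(self, title, description, endpoint=True):
        self.title = title
        self.description = description
        self.endpoint = endpoint  # should this class get its own endpoint?
        self.supported_properties = []

    def add_supported_prop(self, prop):
        """Attach a supportedProperty entry to this class."""
        self.supported_properties.append(prop)

    def generate(self):
        """Serialise the class back into its Hydra JSON-LD form."""
        return {
            "@type": "hydra:Class",
            "title": self.title,
            "description": self.description,
            "supportedProperty": self.supported_properties,
        }

drone = HydraClass("Drone", "A drone in the forest patrol demo")
drone.add_supported_prop({"property": "vocab:Speed", "required": True})
doc_fragment = drone.generate()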

The Doc Writer was something of a holy grail for me. I have always been the kind of person who enjoys organising things and making them cleaner, and this was something that cleared up a lot of things in Hydrus. We no longer needed the parsers and functions to create the EntryPoint and to check whether an endpoint was valid or not. Anything that was related to the API Documentation was added to the Doc Writer [PR, PR]. The Doc Writer helped generalise a lot of things for Hydrus. We were no longer serving every endpoint with every operation. We could now check the API Doc efficiently to see whether a given endpoint and operation pair was valid. We could check whether incoming data was valid. We could check which classes were supposed to have endpoints and which were supposed to have collections, and only create those. All of this happened in real time, on the running server.

Doc Maker

The Doc Writer, with all its organisation and abstraction, introduced a certain overhead for users who wanted to use Hydrus. People had to learn how to write documentation using the Doc Writer in order to create the HydraDoc object that the server needed to use. This was not a favourable situation, as it meant that even though we were using Hydra in the backend to do the work, the user never used an actual Hydra API Documentation with the server. Keeping this in mind, we set out to create the Doc Maker. The Doc Maker had one simple job: take any Hydra API Documentation and create a HydraDoc object from it. The Doc Maker took about 3-4 days to complete [commit] and it did its job well. Since we had created the Doc Writer to be as simple as possible, it was easy to make the Doc Maker use the Doc Writer functions to replicate the API Documentation in a HydraDoc object.
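Conceptually, the Doc Maker works in the opposite direction to the Doc Writer: it walks an existing API Documentation and rebuilds the structured objects from it. A standalone sketch of the idea, with illustrative names, might look like this:

def create_doc(api_doc):
    """Index the supported classes of a parsed API Documentation by title,
    so the server can look up endpoints, properties and operations."""
    classes = {}
    for supported in api_doc.get("supportedClass", []):
        classes[supported.get("title", "")] = {
            "description": supported.get("description", ""),
            "supportedProperty": supported.get("supportedProperty", []),
            "supportedOperation": supported.get("supportedOperation", []),
        }
    return classes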

Plug and Play

The final step towards making things general was to make Hydrus usable as a library rather than a repository. Until now, most of the settings for Hydrus were kept in a settings file that had to be placed in the repo for Hydrus to import and use. This included the database connector, the HydraDoc object, the API name and the base URL for the API. These things needed to be restructured in such a way that users only had to set a few variables in a script and call a function to run a server.

Flask provides us with something known as application contexts, which proved useful in doing this. You can read more about application contexts here. Basically, an application context is similar to the request context in web servers: the variables in the context are only accessible in the running session and can be added before the session begins. An app creator was added to Hydrus that would create an application object that could be used to run the server. This object was basically a Flask app with an API overhead and app-context variables for the database connector, the API Documentation, the server EntryPoint and the base URL of the server [PR].
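Here is a minimal sketch of the app-context idea, based on Flask’s documented signal pattern; the real Hydrus helpers may differ in their exact names and signatures.

from contextlib import contextmanager
from flask import Flask, appcontext_pushed, g, jsonify

def app_factory(api_name):
    """Create a Flask app whose views read settings from the app context."""
    app = Flask(api_name)

    @app.route("/" + api_name)
    def entrypoint():
        # The API Documentation is pulled from the application context,
        # not hard-coded into the view.
        return jsonify(getattr(g, "doc", {}))

    return app

@contextmanager
def set_doc(application, doc):
    """Bind an API Documentation to the app context for the running session."""
    def handler(sender, **kwargs):
        g.doc = doc
    with appcontext_pushed.connected_to(handler, application):
        yield

# Everything served inside the `with` block sees the bound documentation.
app = app_factory("api")
with set_doc(app, {"@type": "ApiDocumentation", "title": "Demo API"}):
    app.run(host="127.0.0.1", port=8080)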

All a user had to do was create a script that imported a few methods and the app factory. They could use the Doc Maker or the Doc Writer to create a HydraDoc object, use the API Doc parser to add the classes and properties to a database session of their choice, and then bind the API Doc, the database session, the EntryPoint URL and the base URL to the app object and run it. This would start an API server that serves data according to the specified API Documentation.

Here is a sample of the script [link]; you can find a complete tutorial on the script and how to set everything up in the wiki.

Testing, Testing

With the introduction of the plug-and-play model into Hydrus, it was now possible to have tests that could be run on any API Documentation. The test suite was modified for both the CRUD and the server tests, and all tests were updated to use the new plug-and-play model. We added a default API Documentation to Hydrus, and tests were run on this dummy API by default if no documentation was given [PR].

Chapter 4: Let’s get some makeup on

By the end of two months, Hydrus was able to set up a REST API completely based on the API Documentation using nothing more than a few lines of code. In the third phase, it was time to showcase Hydrus and its capabilities. As mentioned before, we decided to use the Drone and Controller Forestry Patrol simulation for this. I will not go into the details of that in this post, as this is supposed to be more about Hydrus and not the simulation. My teammate Akshay Dahiya was in charge of setting up this demo, with me working on some parts of it during the last phase. You can find my blog posts about my work on the simulation here and here. Akshay’s blog is also a great way to get insight into the simulation and its details; you can read more about it here.

During this phase, there were some issues with the way the CRUD operations were working. In the initial design, we wanted the operations to be simple, and since we wanted less code to do more, we made the CRUD operations return the JSON response directly to the server when an operation was done. This was not an ideal way to handle things: the models and views of any web server need to be as decoupled and independent as possible. To fix this, we decided that the operations in the CRUD file would raise errors for the different failure cases, which would then be caught by the server and dealt with accordingly. These exceptions were custom-made and were added to Hydrus [PR].
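A hypothetical sketch of this pattern, with illustrative names, shows how the CRUD layer stays free of HTTP concerns:

from flask import Flask, jsonify

app = Flask(__name__)

class InstanceNotFound(Exception):
    """Raised by a CRUD helper when the requested object does not exist."""
    def __init__(self, id_):
        self.id_ = id_

def get_instance(id_, store):
    # The CRUD layer knows nothing about HTTP; it just raises.
    if id_ not in store:
        raise InstanceNotFound(id_)
    return store[id_]

# The view layer catches the exception and turns it into an HTTP response,
# keeping models and views decoupled.
@app.errorhandler(InstanceNotFound)
def handle_not_found(error):
    return jsonify({"message": "Instance %s not found" % error.id_}), 404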

Go Green

During the development of the Forestry Demo, we came across some snags with the default Werkzeug development server that Flask uses. The server was unable to take the load of the multiple requests that the demo was making and would randomly freeze and stop responding. We switched over to gevent, a Python greenlet-based server library, which was more efficient and responsive than the old server, and so we added it to the main setup script [PR]. The upside of all this was that since the app factory gave us a Flask app object, we could use any Flask-compatible WSGI server with the object and it would run fine.
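Swapping the server in is straightforward; a minimal sketch (with an illustrative host, port and app) looks like this:

from flask import Flask
from gevent.pywsgi import WSGIServer

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from a gevent-served Flask app"

if __name__ == "__main__":
    # Any WSGI-compatible server can serve the Flask app object;
    # gevent's WSGIServer handles many concurrent requests gracefully.
    WSGIServer(("0.0.0.0", 8080), app).serve_forever()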

Chapter 5: Let’s tighten the security

Security is a major concern for most web applications, and REST APIs are no exception. It was something we had been contemplating for some time, and I finally decided to implement an authentication/authorization method in Hydrus during the last week of Phase 3. We could not find any way to add security using Hydra itself, so we decided to use HTTP-based authentication protocols. A common way that many APIs add security features is by using the “Authorization” header in the HTTP request. This, coupled with SSL, gives users a nicely secured way to access sensitive data on remote APIs. This is especially important for APIs like Facebook’s that serve sensitive personal information which should not fall into the wrong hands.

The question now arises: how will the user be authenticated? Well, the Authorization header supports some standard authorization methods, one of which is the HTTP “Basic” authorization method. It is a simple way of adding security to web servers, and most standard libraries (including Flask) support it. You can read more about it in the official RFC here. Here is the gist of what happens in the process:

  • The username and password are combined into one string, separated by a single colon (:).
  • The resulting string is encoded into an octet sequence.
  • That octet sequence is encoded using a variant of Base64.
  • The authorization method followed by a space (e.g. “Basic ”) is then prepended to the encoded string.

For additional security, we added a hash layer that used SHA224 to hash the password before adding it to the Basic header. The header was decoded at the server, the hash was generated for the correct password and compared with the one sent in the request. Authentication was added to all operations of Hydrus and could be enabled by setting the authentication flag in the app context. A handy method was also provided to add the user that could access the data to the database during the server setup process.
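Here is a minimal sketch of the client side of this scheme; the function name is illustrative.

import base64
import hashlib

def make_auth_header(username, password):
    """Build an Authorization header value with a SHA224-hashed password."""
    hashed = hashlib.sha224(password.encode("utf-8")).hexdigest()
    # Combine the username and (hashed) password with a single colon,
    # encode to an octet sequence, then Base64-encode the result.
    token = base64.b64encode(("%s:%s" % (username, hashed)).encode("utf-8")).decode("ascii")
    # Prepend the authorization method and a space.
    return "Basic " + token

# On the server side, the header is decoded, the same hash is computed
# for the stored password, and the two are compared.
print(make_auth_header("alice", "secret"))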

These new changes were added to Hydrus, and that ended the three-month development phase of GSoC [PR].

Chapter 6: To Conclude

So all in all, here are the things that I did in chronological order:

  • Basic designing of Hydrus and data flow finalization
  • Database designing [PR, PR, PR]
  • CRUD Operations [commit, commit]
  • RDF/OWL parser [commit]
  • Random data generator [commit]
  • API Doc parser [commit]
  • Subsystems and Spacecraft server setup [PR, commit, PR]
  • Doc Writer [PR, PR]
  • Doc Maker [commit]
  • Pluggable components [PR]
  • Pluggable tests [PR]
  • Custom Exceptions [PR]
  • New Servers [PR]
  • Authentication [PR]

Repositories I worked on during these three months:

hydrus: https://github.com/HTTP-APIs/hydrus [commits]
hydra-flock-demo: https://github.com/HTTP-APIs/hydra-flock-demo [commits]
hydra-flock-drone: https://github.com/HTTP-APIs/hydra-flock-drone [commits]
hydra-flock-central-controller: https://github.com/HTTP-APIs/hydra-flock-central-controller [commits]
hydra-flock-gui: https://github.com/HTTP-APIs/hydra-flock-gui [commits]

It has been an awesome three months and I am sure I will continue working on Hydrus in the months to come. That’s all for this post. Cheers 🙂