Node Typescript API Template with Dependency Injection

Node API template, written in Typescript, with dependency injection
Available on GitHub

Features

  • Dependency-injected everything, so everything is modular and unit testable
  • Typescript everything
  • Everything testable with emulators and Docker, many examples
  • Express API with dependency injected routes, controllers and middleware
  • Firestore with transparent validation and caching
  • Websockets driven by distributed events service
  • Fail-safe and centralized configuration loading and validation
  • Flexible and configurable rate limiting
  • Flexibility over magic

Folder Structure
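A rough sketch of the layout, limited to the folders referenced in this post; the names here are inferred from the sections below and may not match the repo exactly:

```text
src/
├── controllers/       # translate raw JSON to requests/models, call services
├── middleware/        # auth, rate limiting; attach things to request.locals
├── models/            # validated state, transferred internally and saved to the db
├── requests/          # validated state, transferred externally
├── services/          # the business logic, all dependency injected
└── serviceManager.ts  # the composition root that wires everything together
```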

Why Dependency Injection?

For those of you who have not heard the term before, dependency injection (or inversion of control) is a pattern wherein an object or function is passed its dependencies by the caller instead of requesting them directly. This improves modularity and reuse, and makes testing much easier.

Without dependency injection, any class you create would directly require its dependencies. This tightly binds one class to another, and means that when you are writing tests you either have to spin up the entire dependency tree and deal with all that complexity, or you have to intercept the require call.

Intercepting require calls is possible and commonly done, but not without caveats and side effects.

  • If your test blows up in the wrong way, mocked require calls may not be restored correctly before the next test.
  • Even in normal use, mocked require calls can easily contaminate other tests if not done and undone perfectly.
  • Intercepting require calls deep in the structure can be difficult and break easily and non-obviously if files are moved.
  • In the event that require-mocking fails, or mocks the wrong thing, the code will fail over to using the real instance instead of failing safe, which can cause hard-to-spot problems.

In my opinion, using dependency injection is just simpler for both implementation and testing.

Major Components

Talking about code is like dancing about architecture. It's better to just read/use the code. But...

I'll briefly describe each major component, and then how they all fit together.

Services

Services all follow the same signature, which you can see examples of in the services/ folder.

The constructor for every service takes a map of other services this service class depends on, and a configuration object with the properties relevant to this service.

I usually make the services and config args specific to each individual service class. You can make them the same for all services to reduce boilerplate, but I find that gets confusing and just moves all that detail to the already busy serviceManager.

You don't have to pass in all of the dependencies, but my rule is that I pass in any external libraries that make an async call or do serious work; or any other services. Things like lodash or simple utilities I don't generally inject.
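A minimal sketch of that shape, assuming illustrative names (WidgetService and its firestoreService dependency are not the template's actual classes):

```typescript
// The dependencies map and config object are typed specifically for this
// service, as described above.
interface WidgetServiceDependencies {
  firestoreService: { save(collection: string, doc: object): Promise<void> };
}

interface WidgetServiceConfig {
  collectionName: string;
}

class WidgetService {
  constructor(
    private services: WidgetServiceDependencies,
    private config: WidgetServiceConfig
  ) {}

  async saveWidget(widget: object): Promise<void> {
    // The injected client is used instead of a direct import, so a unit
    // test can pass in a stub and assert on what was saved.
    await this.services.firestoreService.save(this.config.collectionName, widget);
  }
}
```

In a test, `firestoreService` can be replaced with a two-line stub rather than intercepting any require calls.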

Models

As covered in the posts on validated models and firebase caching, models hold state and validate their contents. They differ from Requests below, in that they are primarily used to transfer state internally and save it to the db.

In this template I've included a few more concrete examples in models/ and made use of them throughout the code.
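An illustrative model in the spirit of the ones in models/; the field names and validation rules here are assumptions, not the template's exact code:

```typescript
import * as crypto from "crypto";

class UserRecord {
  private constructor(public id: string, public email: string) {}

  // The id is derived deterministically from an immutable property, so
  // the same email always produces the same id.
  static generateId(email: string): string {
    return crypto.createHash("sha256").update(email).digest("hex").slice(0, 20);
  }

  static create(fields: { email: string }): UserRecord {
    // Validation happens at construction, so an invalid instance never exists.
    if (!fields.email || !fields.email.includes("@")) {
      throw new Error("invalid email");
    }
    return new UserRecord(UserRecord.generateId(fields.email), fields.email);
  }
}
```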

You can see in the above example that in addition to the same sort of structure I've outlined in other posts, it also includes a generateId and create function.

Wherever possible I try to generate model IDs deterministically based on immutable properties of that model.

Requests

Requests are very similar to models, with the minor difference of being principally used to transfer state externally. In a lot of cases I end up moving all request models into a dedicated repo and NPM package that is shared with the frontend.

Controllers

Controllers are one of the few places in this repo that contain a bit of hidden functionality. Examples in controllers/.

Controllers are simple classes that translate raw incoming JSON into requests or models, and then invoke service calls with those requests or models. They serve as the minimal translation layer between the outside world and the services within the API.

They generally look like this:
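A sketch of that shape, with an inline helper standing in for the auto-bind package and illustrative service and request names:

```typescript
// Minimal stand-in for auto-bind: binds every prototype method to the
// instance so routes can reference them without losing `this`.
function autoBind(instance: any): void {
  for (const name of Object.getOwnPropertyNames(Object.getPrototypeOf(instance))) {
    if (name !== "constructor" && typeof instance[name] === "function") {
      instance[name] = instance[name].bind(instance);
    }
  }
}

interface ApiRequest {
  body: any;
  locals: { user: { id: string } };
}

class WidgetController {
  constructor(
    private services: { widgetService: { list(userId: string): Promise<object[]> } }
  ) {
    autoBind(this);
  }

  // No response methods are called here; whatever is returned (or thrown)
  // becomes the HTTP response via the ResponseBuilder wrapper.
  async listWidgets(request: ApiRequest): Promise<object[]> {
    const user = request.locals.user; // attached upstream by auth middleware
    return this.services.widgetService.list(user.id);
  }
}
```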

A couple things to note in here.

  • I use autoBind in the constructor. This is just to make referencing the attached functions easier in the route definitions.
  • I pull a user model out of request.locals. This is the user model attached to the request upstream by a middleware when the token is validated and matched to a user.
  • I don't call response methods anywhere in here.

The reason I don't call response methods explicitly is that all controllers and middleware in this API are automatically wrapped with an outer function that handles this for you. It's done by ResponseBuilder, which takes whatever is returned by any controller function and wraps it in a standard response format.

Additionally, any exceptions that are thrown anywhere during the request are caught by ResponseBuilder. If the exception has an attached code property, that is used as the HTTP code, otherwise it's treated as a 500.
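A drastically simplified sketch of that wrapping behavior; the real ResponseBuilder does more, and the envelope shape shown here is an assumption:

```typescript
type Handler = (req: any) => unknown | Promise<unknown>;

interface MinimalResponse {
  statusCode?: number;
  body?: unknown;
  status(code: number): void;
  json(payload: unknown): void;
}

function wrapHandler(handler: Handler) {
  return async (req: any, res: MinimalResponse): Promise<void> => {
    try {
      // Whatever the controller returns becomes the response payload.
      const data = await handler(req);
      res.status(200);
      res.json({ success: true, data });
    } catch (err: any) {
      // An attached code property becomes the HTTP status; otherwise 500.
      const code = typeof err.code === "number" ? err.code : 500;
      res.status(code);
      res.json({ success: false, error: err.message });
    }
  };
}
```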

Middleware

Middleware classes have the same structure and wrapper as controllers; the only difference is that they typically attach something to the locals property of the request and then call next.
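A sketch of that shape, assuming an illustrative token-lookup service:

```typescript
interface AuthedRequest {
  headers: { authorization?: string };
  locals: { user?: { id: string } };
}

class AuthMiddleware {
  constructor(
    private services: { userService: { findByToken(token: string): Promise<{ id: string } | null> } }
  ) {}

  // Same wrapper as controllers: throwing here becomes an HTTP error
  // response, so there is no explicit response handling.
  async authenticate(request: AuthedRequest, next: () => void): Promise<void> {
    const user = await this.services.userService.findByToken(request.headers.authorization ?? "");
    if (!user) {
      const err: any = new Error("unauthorized");
      err.code = 401;
      throw err;
    }
    request.locals.user = user; // read downstream via request.locals
    next();
  }
}
```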

ServiceManager

The serviceManager is where everything is stitched together. In a dependency-injected pattern this is often referred to as the composition root. Here all the clients (redis and firestore clients, etc.), services, controllers, and middleware are created and passed into each other, resolving their dependencies in the right order. Take a look at it to see what I mean; it's too big to post an example here.
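A heavily reduced sketch of the composition-root idea, with illustrative stand-in classes (the real serviceManager wires up far more):

```typescript
class RedisClient {
  published: Array<[string, string]> = [];
  async publish(channel: string, message: string): Promise<void> {
    this.published.push([channel, message]);
  }
}

class EventsService {
  constructor(private services: { redisClient: RedisClient }) {}
  async emit(name: string, payload: object): Promise<void> {
    await this.services.redisClient.publish(name, JSON.stringify(payload));
  }
}

class UserService {
  constructor(private services: { eventsService: EventsService }) {}
  async createUser(email: string): Promise<{ email: string }> {
    const user = { email };
    await this.services.eventsService.emit("user.created", user);
    return user;
  }
}

// Clients are created first, then services, then anything that depends
// on them; every dependency is passed in explicitly, in order.
function buildServices() {
  const redisClient = new RedisClient();
  const eventsService = new EventsService({ redisClient });
  const userService = new UserService({ eventsService });
  return { redisClient, eventsService, userService };
}
```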

Other Features

Events

One of the services I included is the events service. It notifies other services, API containers, or the UI of changes to a given model. It uses eventemitter2 and redis pubsub to do this in a distributed way, so depending on the event type, you can listen for events in your node, or any node in the cluster.

Sending an event is simple:
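The calling shape looks roughly like this; it's sketched with Node's built-in EventEmitter to stay self-contained, whereas the template's events service wraps eventemitter2 and redis pubsub, and the event name and payload are illustrative:

```typescript
import { EventEmitter } from "events";

// Stand-in for the injected events service.
const eventsService = new EventEmitter();

// Notify anything listening that this user's model changed.
eventsService.emit("user.updated", { userId: "abc123" });
```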

These events can then be listened for elsewhere:
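The listening side, again sketched with Node's EventEmitter; eventemitter2 additionally supports wildcard patterns like "user.*" for the event name:

```typescript
import { EventEmitter } from "events";

const eventsService = new EventEmitter();

eventsService.on("user.updated", (payload: { userId: string }) => {
  // React to the change: push it down a socket, invalidate a cache, etc.
  console.log(`user ${payload.userId} changed`);
});

eventsService.emit("user.updated", { userId: "abc123" });
```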

Socket.IO

One place events are used heavily is to communicate with the UI via socket.io.

My socket.io API has controllers and middleware just like the express API. The middleware mediates authentication and the controller sends out events and responds.

In the case of this template, the controller just relays events for the authenticated user.
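A sketch of that relay idea: after the middleware authenticates the socket, the controller forwards that user's events down it. The socket shape and names are illustrative, not socket.io's actual API:

```typescript
import { EventEmitter } from "events";

interface UserSocket {
  userId: string;
  send(eventName: string, payload: object): void;
}

class SocketRelayController {
  constructor(private events: EventEmitter) {}

  attach(socket: UserSocket): void {
    this.events.on("user.updated", (payload: { userId: string }) => {
      // Only relay events belonging to this socket's authenticated user.
      if (payload.userId === socket.userId) {
        socket.send("user.updated", payload);
      }
    });
  }
}
```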

Rate Limiting

The rate limiting sub-system should probably be its own post at some point, but the examples are included for reference.

It allows multiple overlapping limits to be defined, and the associated middleware will enforce the limits and attach the rate-limit headers.
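A sketch of how overlapping limits can be checked together; the config shape is an assumption, and the in-memory counter stands in for the redis-backed storage the real system would need across API containers:

```typescript
interface RateLimit { windowSeconds: number; max: number; }

// Two overlapping limits can apply to the same route: a short burst
// limit and a sustained hourly limit.
const limits: RateLimit[] = [
  { windowSeconds: 1, max: 5 },       // burst
  { windowSeconds: 3600, max: 1000 }  // hourly total
];

// Counter keyed by caller + limit + window bucket.
const counters = new Map<string, number>();

function checkLimits(callerId: string, nowSeconds: number): { allowed: boolean; remaining: number } {
  let remaining = Infinity;
  let allowed = true;
  for (const limit of limits) {
    const windowStart = Math.floor(nowSeconds / limit.windowSeconds);
    const key = `${callerId}:${limit.windowSeconds}:${windowStart}`;
    const used = (counters.get(key) ?? 0) + 1;
    counters.set(key, used);
    if (used > limit.max) allowed = false;
    remaining = Math.min(remaining, limit.max - used);
  }
  // "remaining" is what the middleware would attach as an
  // X-RateLimit-Remaining style header.
  return { allowed, remaining: Math.max(remaining, 0) };
}
```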

Conclusion

So that is it for now in this series. If you have questions, hit me up in the issues of this repo.