Thursday, 9 February 2017

Problems with Spring Boot Eureka EIP Binding? Kill those Zombies.

Warning: this is a long blog! Full details of this will be available in next week's release of Microservice Deployment, from VirtualPairProgrammers!

This week I have been mainly getting Eureka into production. Specifically, I'm deploying to AWS, in a multi availability-zone configuration. I have an Auto Scaling Group firing up two instances, each in a different Availability Zone (AZ).

This has been, to put it mildly, an "interesting challenge". Obviously Netflix have a massive production load running on it - so we know it works! - but the documentation on how the rest of us should configure it is sketchy at the time of writing.

This blog concentrates on a problem which will be fixed in the forthcoming "Dalston" releases of Spring Cloud, but it may affect those on legacy code bases. (Also Dalston is due for release later in February and I can't delay the course any longer!)

Note: much of this has been done in a limited timescale under pressure. Ideally when the pressure is off I would spend time examining more of the source code - and I will have misinterpreted or plain got stuff wrong, so do comment or contact me if you know more than me!

Problem: Zombie Instances.

The main problem I've had in preparing the architecture is zombies. Any instance which has been forcibly terminated is not expired from Eureka and remains, indefinitely, as a "Zombie" instance. This is catastrophic because these Zombie instances will continue to be called by clients, even when healthy versions of the instances exist. (We are using Hystrix which can be a hindrance if the fallback hides the problem - I've switched off all Circuit Breakers for this exercise).

(NB: if an instance closes cleanly, this isn't a problem, because the instance deregisters itself from Eureka on shutdown.)

The instance highlighted above was killed about an hour ago.....

And here's the live and dead (stopped) instance in the AWS EC2 Console.

Even though the instance is long dead, clients will still be handed both the live and dead instance references. As we're using Ribbon as a load balancer on the client side, about half the time we get....

Hooray! But half the time we get....

Boo! Recall that I've temporarily disabled circuit breakers, so no fallbacks are available, making the crash more obvious.

Possible Solutions:

1) Self Preservation Mode.

I won't bother re-describing self preservation mode here, as references 2) and 3) cover it well.

In development, where you only have a handful of instances, this mode is a right pain in the neck, because a single instance failing to renew its lease is interpreted as a network catastrophe, and Eureka stops expiring instances, on the (for us, bad) assumption that the instance is still there, it's just that Eureka can't see it for some reason. So it will continue serving details of this instance to clients.

For me, I don't think this was the problem. I had other instances registered which stopped the threshold kicking in - *I think*. Just to be sure, I dropped the threshold right down (I never saw the red emergency warning, so I assume self preservation wasn't happening). In desperation, I switched this mode off altogether - you get a red angry warning for doing this, but who cares?

The following properties achieve this:

# Make the number of renewals required to prevent an emergency tiny (probably 0)
eureka.server.renewalPercentThreshold=0.1

# In any case, switch off this annoying feature (for development anyway).
eureka.server.enableSelfPreservation=false

Q: Is the self preservation feature really all that useful? Obviously Netflix thinks so and I believe them. But I'd like to get a handle on what the use is. In the event of a network partition causing Eureka to expire all of the instances, won't the clients continue using their own cached instances anyway? The "emergency" doesn't sound that serious to me, and certainly no worse than clients getting references to zombie instances. I need to investigate this more deeply when the pressure is off.

2) EIP binding causes incorrect metadata - bounce the server.

This was the real reason.

Eureka instances need a fixed IP address or domain name. Usually you would achieve this by setting up a load balancer (which in AWS is given a fixed domain name - its IP address may vary over time, but you are insulated from that). Then you could place the Eureka instances behind the load balancer.

However, Eureka has its own scheme, as described in reference 5).

1) Reserve yourself a set of Elastic IP addresses (EIPs) from AWS, one for each of your Eureka instances (I need two). This gives you IP addresses permanently allocated to your account, which you can freely associate with any of your EC2 instances.

2) Configure your Eureka server with a comma-separated list of all of the EIPs. Eureka insists that you use the full DNS-style name:

eureka.client.serviceUrl.defaultZone=http://ec2-35-166-222-19.us-west-2.compute.amazonaws.com:8010/eureka/,http://ec2-35-167-126-96.us-west-2.compute.amazonaws.com:8010/eureka/

Typically, you'd bake the Eureka code base onto an AMI so that you can start up new Eurekas easily from an Auto Scaling Group.

3) Start up an EC2 instance from the AMI containing your Eureka image. Its IP will, as usual, be dynamically allocated by AWS, so it will be something like 55.34.23.123 (whatever).

4) Now for the weird bit....as part of the startup process, Eureka will grab one of the EIPs from the list given in step 2), and it will re-bind the IP address of this EC2 instance to that EIP. That's right, the IP address of this server will be changed, on the fly, during the startup of the instance.

5) The other instance will do the same thing - but in this case it will find that the first address in the list has already been taken, so it then tries the second item in the list. It will successfully bind to this IP address.

And, hey presto, you now have two Eurekas, peers of each other, each bound to one of the IP addresses that we knew about in advance.

For the clients, we can give them the exact same config (this is convenient because we can put this property in the global config server instead of having to repeat it in every single microservice):

eureka.client.serviceUrl.defaultZone=http://ec2-35-166-222-19.us-west-2.compute.amazonaws.com:8010/eureka/,http://ec2-35-167-126-96.us-west-2.compute.amazonaws.com:8010/eureka/

The client can now choose either of these URLs to work with as its Eureka server. According to the docs (although I haven't had time to verify this), the client will favour a Eureka server in its own Availability Zone - this is presumably for optimal performance. However, this isn't critical: if that server isn't available, it will fail over to another one from the list, i.e. from a different AZ. This will incur slower performance, although AZs in the same region have low-latency connections to each other.

(Note that the Eureka replicas aren't a master/slave arrangement, it's peer-to-peer, so there's no "main" Eureka server. For a client in AZ us-west-2b, the preferred server is the one in that zone, but it's just a preference.)

So that's how Eureka is designed to bootstrap itself, but it seems this is the root of the Zombie problem.

Here's my server, which after booting up grabbed itself the EIP 35.167.126.96:

But check out the instance info at the bottom:

The public-ipv4 and public-hostname are wrong - these were correct when the instance was booting up, but after the EIP bind, these values are now stale.

So what? Well, I'm not sure. The public hostname is important to Eureka, and I can see how it would prevent the servers from replicating with each other - because Eureka uses the host names to find its replicas:

This Eureka instance thinks its peers are unavailable, because it's looking for host names containing the 35-xxx-xxx-xx series of numbers. But as shown above, the hostnames are stuck on the old "54-xxx-xxx-xxx" values.

Confusingly, I would understand if this stopped replication from working - but replication actually works fine, despite the UI above indicating unavailable replicas.

Now, I do not understand why (again, I need to take time to step through source code), but what is demonstrably true is that this misconfiguration causes the zombie instances. Maybe someone out there can explain this.

A quick fix for the problem is to simply log on to the Eureka instances and restart the Spring Boot application. This causes their instance data to be refreshed, this time to the correct values:

The instance info after a restart of the Eureka App - as an EIP bind wasn't necessary this time, the IP address hasn't changed and the values populated on startup are now correct.

And the replication is now correct - we can see the "other" Eureka instance listed as available.

But the real difference (at last, I get to the point!!!) is that instance de-registration is now working. Let's start up two instances of the microservice:

Above: two instances running in EC2

Above: correctly registered in Eureka....

Now I'm going to kill a random instance and start a stopwatch:

This time, success! After 2 minutes 42 seconds (more on the timings later) I saw the cancellation run through the server log:

Which is confirmed on the UI:

Note that this does not mean that the clients will now have clean, non-zombie references. Eureka actually sends a JSON document back to clients, which it caches (I think for 30 seconds), so it may take up to 30 seconds before the zombie disappears from the data sent to clients. And then Ribbon, on the client side, has a cache as well. So many caches - I've often banged my head thinking everything is unreliable when actually it just takes time for the caches to align.
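For reference, these are the settings I believe control those caches (property names as I understand them from the Spring Cloud and Eureka defaults - treat this as a starting point rather than gospel):

# How long the Eureka server caches the registry payload it serves to clients (default 30s)
eureka.server.responseCacheUpdateIntervalMs=30000

# How often a client re-fetches the registry from Eureka (default 30s)
eureka.client.registryFetchIntervalSeconds=30

# How often Ribbon refreshes its server list from the Eureka client's cache (default 30s)
ribbon.ServerListRefreshInterval=30000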

A minute later and every refresh on my webpage is showing a map again. Joy.

So, I cannot explain why the bad instance information is preventing Zombie expiration, but it does. Any ideas???

Refreshing the Instance Info Automatically

Trouble is, we can't rely on being able to "bounce" the Eureka app every time it starts up. I did consider adding additional steps to my Ansible script; I could:

- fire up Eureka from AMI

- Wait for port 8010 to respond on the EIP

- Restart Eureka App

But that wouldn't work when the server is started from an Auto Scaling Group - and that's a requirement: if a Eureka EC2 instance terminates for any reason, it needs to be automatically started again. The ASG will just restore the AMI, which will cause an EIP bind to happen, but unless we do something clever (a script that triggers after an ASG event? My head's hurting) the instance information will be in that bad state.

So, thanks to the GitHub issue at reference 1) (EIP publicip association not correctly updated on fresh instance), Niklas Herder (to whom I owe a drink, or something) suggested using a timer in the main class of the Eureka App to automatically refresh the instance's info if that info has changed. It works perfectly.

I've modified his suggested code slightly to remove anything we don't need, and I'm left with this....

 // Imports needed, for reference (package names as of the Camden release):
 // org.springframework.cloud.commons.util.InetUtils
 // org.springframework.cloud.netflix.eureka.EurekaInstanceConfigBean
 // org.springframework.context.annotation.Bean
 // org.springframework.scheduling.annotation.Scheduled
 // com.netflix.appinfo.AmazonInfo
 //
 // Note: @EnableScheduling must be present on a configuration class for the @Scheduled timer to fire.

 @Bean
 public EurekaInstanceConfigBean eurekaInstanceConfigBean(InetUtils utils) {
  final EurekaInstanceConfigBean instance = new EurekaInstanceConfigBean(utils)
  {
   // Every 30 seconds, re-read the EC2 metadata; if the EIP bind has changed it,
   // refresh this instance's registration details to match.
   @Scheduled(initialDelay = 30000L, fixedRate = 30000L)
   public void refreshInfo() {
    AmazonInfo newInfo = AmazonInfo.Builder.newBuilder().autoBuild("eureka");
    if (!this.getDataCenterInfo().equals(newInfo)) {
     ((AmazonInfo) this.getDataCenterInfo()).setMetadata(newInfo.getMetadata());
     this.setHostname(newInfo.get(AmazonInfo.MetaDataKey.publicHostname));
     this.setIpAddress(newInfo.get(AmazonInfo.MetaDataKey.publicIpv4));
     this.setDataCenterInfo(newInfo);
     this.setNonSecurePort(8010);
    }
   }
  };
  // Initial population from the EC2 metadata service, exactly as before
  AmazonInfo info = AmazonInfo.Builder.newBuilder().autoBuild("eureka");
  instance.setHostname(info.get(AmazonInfo.MetaDataKey.publicHostname));
  instance.setIpAddress(info.get(AmazonInfo.MetaDataKey.publicIpv4));
  instance.setDataCenterInfo(info);
  instance.setNonSecurePort(8010);

  return instance;
 }

It's a hack, sure, but it works! Forgive the hardcoding of the 8010, I haven't got around to polishing this off yet.
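(One possible tidy-up, which I haven't tested: inject the configured port instead of repeating 8010.)

 // Untested sketch: pull the port from application.properties (server.port)
 // and use this field wherever the code above hardcodes 8010
 @Value("${server.port:8010}")
 private int nonSecurePort;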

Just to prove it, I'll run through the exercise again:

After baking the new "refresh" code into an AMI, I've started an Auto Scaling Group which has triggered the startup of two new Eureka servers. Notice their IPs are in the 54.xx.xx.xx range.

... a few minutes later and their IPs have changed:

The all important instance info is up to date:

And all is well with the world. This time I killed an instance and after 2 minutes 57 seconds the registration was cancelled.

This problem will be fixed when Spring Cloud's Eureka support migrates to version 1.6 of the underlying Eureka implementation. The Dalston release train will be the first release containing the fix. We're still on Camden and for "reasons" I want to stay with that for now (mainly, I don't want to delay and I don't want to change what we have). But all of the above may not be an issue for you.

In the next blog post, I'll address the other pain in the neck with Eureka, which is slow registration and de-registration.

I'm indebted to the following resources:

  1. github.com/spring-cloud/spring-cloud-netflix/issues/1321, especially the contribution from Niklas Herder (https://github.com/herder)
  2. github.com/spring-cloud/spring-cloud-netflix/issues/373 - Bertrand Renuart has described many of the internals excellently - this would form a good basis for an improved set of official documents.
  3. At the time of writing, Abhijit Sarkar is clearly going through similar pain as me and he's writing up an excellent blog "Spring Cloud Netflix Eureka - The Hidden Manual". blog.abhijitsarkar.org/technical/netflix-eureka/. This seems to be a work in progress, I hope his work will become the actual manual before long.
  4. The Spring Cloud Eureka documentation has a few references to production settings.
  5. This is the only guide I know of to EIP binding, although the settings given here don't seem to work properly in Spring Cloud Eureka.

There is also the Netflix wiki at https://github.com/Netflix/eureka/wiki, which is a reasonable start, but many of the details are vague and confusing, and some are not relevant to the Spring Cloud version of Eureka. The document starts by advising you to create multiple properties, one for each AZ; I've failed to get Spring to pick this up, and in all examples I've only seen a comma-separated list of addresses on the eureka.client.serviceUrl.defaultZone property. I've gone with that and it seems OK, but I'm interested to find out more.

Sunday, 16 October 2016

Spring Boot Crashing Due to Unsatisfied Dependency?

We've just had a report of a possible bug on our Microservices course - your web application might fail to start up with something like the following in the Stacktrace:

Launcher.java:49) [spring-boot-devtools-1.4.1.RELEASE.jar:1.4.1.RELEASE]
Caused by: org.springframework.beans.factory.UnsatisfiedDependencyException: 
Error creating bean with name 'positionTrackingExternalService': Unsatisfied dependency expressed through field 'remoteService'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'com.virtualpairprogrammers.services.RemotePositionMicroserviceCalls': FactoryBean threw exception on object creation; nested exception is java.lang.NullPointerException

This will happen if you use Spring Boot 1.4 - the original course code used Boot 1.3.

So, as a quick (and unsatisfactory) fix, you can drop the version of Boot down to 1.3. However, the root cause appears to be classloader-related (I haven't had time to fully investigate it, although the bug report here gives a big clue) and is triggered by the presence of the devtools dependency. We added that back on the first course to enable automatic container reloading.

Boot maintains two classloaders, and on a change to the code, it only needs to restart one - this is a speed optimisation. I have no idea why this break has happened in Boot 1.4, but if you have this problem, the solution is to remove the spring-boot-devtools dependency from your POM and all will be well. Of course, you'll now need to manually bounce the server each time you make changes.
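(An alternative I haven't verified against this particular bug: devtools lets you keep the dependency but disable the restart classloader with a property, which may be enough to avoid the clash.)

# Possible workaround (untested for this issue): keep devtools but switch off the restart classloader
spring.devtools.restart.enabled=false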

If I get time to investigate this more thoroughly, I'll make further posts but hopefully for now this will stop anybody hitting a brick wall on the course!

Wednesday, 12 October 2016

Spring Boot Microservices

Here's the opening chapter from the new Microservices course. As always I talk far....toooo.....slowly...... so you can hit the speed up button!

Full closed captions/subtitles are available on the full course from the VirtualPairProgrammers.com site.

Thursday, 6 October 2016

Microservices with Spring Boot: Release Date October 11 2016

Microservices - it's nearly here!

We're approaching the end of an intense, exhausting but fun few weeks recording the new release, Microservices with Spring Boot. I'll put up a proper announcement with a preview video when it's finally out.

In the meantime, a few words about the thought process behind the course: whilst internally at VirtualPairProgrammers, we use Kubernetes for much of the orchestration of the microservices, we wanted a course that would integrate closely with our Spring and Spring Boot lines, which took us down the path of using Spring Cloud.

Spring Cloud, as you'll see on the course, is a superb collection of components, mainly derived from the work done at Netflix, which offers features such as service discovery, load balancing, fail safety/circuit breaking and so on.

Whether you want to use Spring Cloud or not, the course is an in-depth starter to microservices. You'll be managing a small set of microservices which collaborate to provide a vehicle tracking front end:

What's In?

  1. An introduction to Microservices
  2. Messaging with ActiveMq
  3. Running a Microservice Architecture
  4. Service Discovery with Netflix Eureka
  5. Client Side Load Balancing with Ribbon
  6. Circuit Breakers with Hystrix
  7. Declarative REST with Feign
  8. Spring Cloud Config Server

What's out?

This is only a "getting started", and I realise that the course ends at a point where there are still many questions to answer. Mainly, how to deploy the architecture to live hardware - as discussed in previous blog posts, automating the entire workflow is so important.

For this course, I wanted life to be as simple as possible - the downside is that it did mean running a lot of microservices locally (all in separate Eclipse workspaces), and a lot of faffing stopping/starting queues, etc. In a way, I hope this helps - it should make people realise that managing microservices is much harder than managing a monolith.

This was a hard decision to make for this course - I wanted it to be as production grade as possible (read: not a hello world app), without it being daunting and off-putting. I think at certain points in the course it does get hard, when we're constantly switching from one microservice to the next, stopping and starting services and generally trying to juggle several things at once. All of this in a true production environment would be automated and scripted - but I didn't want that to get in the way of the learning.

So there will be a follow on, where we use configuration management to automate the deployment to live hardware (it will be AWS as that's what we're comfortable with). We'll also use Virtual Machines to allow developers to run full production rigs on local development machines.

Also missing from this course, which I very much wanted to include, were Spring Cloud Bus (Automatic re-distribution of system wide properties) and Zuul (API gateway); these were lost mainly due to timescale pressures, and that the sample system couldn't really support their use. We'll be looking at these topics in the very near future.

Also, since this is a fast moving area, I expect that we will have to update the course in the near future anyway, so we'll be releasing much more on this topic soon!

The "Microservices part 2: Automating Deployment" (working title) is aimed to be out late November/early December. It'll feature Ansible, Docker (probably) and lots of automated AWS provisioning, including a healthy slice of AWS Lambda. Can't wait!

Wednesday, 21 September 2016

JavaEE Delayed Again.

It has been announced that JavaEE 8 has been delayed, yet again.

There's something very odd going on at JavaEE HQ.

Article from The Register here, and InfoQ here

Oracle are selling this as a good thing as it gives them a chance to "cope with the prevailing winds in development and deployment", which I read as "we're going to get microservicey and we're going to cram in some Dockerish type stuff as well".

Why is JavaEE even needed today? Some organisations like the stability and security that a standards process offers. But all of the components of JavaEE, such as CDI, EJB, JMS, JAX-RS etc, etc, are all standards in their own right. We cover these in our excellent Wildfly series of training courses. [1]

Is an overarching "umbrella" standard, keeping an almost uncountable number of unrelated APIs in lock-step really relevant in today's agile development world?

Rather than waiting until 2017 (get real, it will be 2018 at the earliest), Java (SE) is a leading cloud and microservices solution TODAY, using a multitude of battle-tested, industrial-strength libraries. Some of those fall under the JavaEE umbrella (JAX-RS, JMS or even EJB), whilst some come from commercial companies donating their software to the community (such as Netflix OSS [2], which we'll be covering in the upcoming Microservices course).

While they're stuffing in new toys, they're also cutting. With all the precision of a newly qualified surgeon who's just celebrated by downing 8 shots of absinthe. The proposed standard (JSR 371) for an MVC framework is being dropped - because "microservices are often headless, so an MVC framework is not needed" (they claim). But as far as I know, JSF will remain. I smell a rat here, JavaEE has never been afraid of shoving in niche APIs. And MVC wouldn't be niche - although it would be (at least) 10 years too late, a specified MVC would have been a valuable counterpoint to Spring-MVC.

I admit I'm annoyed by this. I like JSR371, it's not glamorous but it would have done the job. It cleverly uses the same patterns as JAX-RS so it's quick to learn. It has a reference implementation here so technically you can use it today. But if you're on a JavaEE project that insists on standards, you have to wait until it's absorbed into JavaEE, which at the time of writing is quarter-past-never. Because, by their own admission, it's not trendy enough.

Are JavaEE really saying that nobody's building web applications any more? JavaEE web developers are currently stuck between bringing in a non-standard web API (choose from hundreds) - hence nullifying the whole point of JavaEE - OR using JSF, a terrible tool for building action-driven web applications. (It's good for component-based, non-navigational web apps, and it's much better than it used to be. But still, not the tool I'd choose unless forced.) [3]

To add to the craziness, there's also talk of JMS being frozen at version 2.0, because "it's not relevant to the cloud". Which makes a fool of me, because I claim that messaging using a guaranteed delivered mechanism like JMS is a great way of calling Microservices. Ok, well I'll carry on doing "irrelevant" stuff, deploying working software while JavaEE carries on ruminating behind their closed doors.

This is, in my opinion, extremely bad PR for Java, making "us" look like a committee led dinosaur. The true picture of Java development is extremely exciting, and non JavaEE libraries continue to lead the way in pushing the industry forward. Java will never be cool again (oh for the heady days of 1996), but it (and the JVM) delivers big and I hope this will continue. With or without Oracle's "help".


1. [There's nothing wrong with application servers, we're using Wildfly heavily right now. We find you can be very agile and it's a great fit for Microservice development (as is Spring Boot)]

2. [I've seen proposals for client side circuit breaking, externalized configuration stores and health checking for JavaEE 9, which suggests that they are going to absorb the Netflix OSS stack into the standards - again, by then, the state of the art will probably have moved on by a leap or a bound or both]

3. [I almost forgot JavaScript front ends, which of course is a very common choice. They could have used this as their excuse not to bother, but then they'd have to admit that JavaScript exists and can do some things better than Java]

Tuesday, 30 August 2016

Microservices Part 3 - How to call a Microservice

I'm busy working away on a new Microservices using Spring Boot course for Virtual Pair Programmers. I hope my next blog post will be a draft running order with an estimated release date: in the meantime as promised I'm going to look at how to call a Microservice.

Along the way I'll point out how Spring Boot can help - at the same time this is helping me to decide what needs to be on the new course.

As per last week, my old monolith (which is eventually going to be broken down until nothing remains) is going to call a new microservice, the "VAT service". It has a single responsibility: to return the tax due on an amount, based on the country of residence for a customer.

It's easy to make these things simple on a training course - in real life, various governments have conspired to make VAT a living nightmare, so really this service has to deal with IP addresses, physical addresses and the location of the requesting bank. But that's ok, the microservice still has a single responsibility. Don't be afraid to build microservices which may feel trivially small - that's part of the point (and they tend to grow anyway).

Obviously, we need to call this microservice. How?

Approach 1: The easy way - a naked REST call

Although I advised in the previous blog that you can and should consider other remoting solutions such as gRPC, let's assume we're going with REST. It's kind of standard.

As you're reading this blog, you're probably a Spring fan, so let's also assume that the caller (client, at present the monolith) is using Spring. So the natural choice is to go for the RestTemplate.

We've covered the RestTemplate extensively in our Spring Remoting course, so I won't labour the point. But something like:

// 'template' here is a plain Spring RestTemplate; Percentage is our own domain class
String countryRequired = "GBR";
Percentage vatRate = template.getForObject("http://localhost:8039/vat/{country}", Percentage.class, countryRequired);

Almost a no brainer. Things to consider:

  1. We need to get rid of the hardcoded URI - ideally we would have a service discovery solution as part of our architecture. I touched on this briefly last time, but Spring Boot has a plugin which wraps up the very easy to use Eureka, which was originally built by Netflix. Although we use Kubernetes on our live site, I've decided that as Eureka is so tightly integrated with Boot, it would be a shame not to cover it. So it's going to be on the course!
  2. What happens if the VAT service is down? In a Microservice architecture, you must assume that at any one time, at least one service is likely to be unavailable. Again, further Netflix components can help, and Spring can easily integrate with Ribbon (for load balancing) and Hystrix (for Circuit breaking - more on circuit breaking in a future blog post).

Together, these two sub-frameworks can lead to a very robust architecture. I'll be making sure that our practical work on the course explores this in full.
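Just to give a flavour of where this ends up (a sketch, not the course code - "vat-service" is a hypothetical name for whatever the VAT service registers itself as in Eureka): with Eureka and Ribbon on the classpath, a @LoadBalanced RestTemplate lets you swap the hardcoded host and port for the logical service name.

 @Configuration
 public class RestClientConfig
 {
  // @LoadBalanced wires Ribbon into this RestTemplate, so URLs can use the logical
  // name a service registered with Eureka instead of a hardcoded host:port
  @LoadBalanced
  @Bean
  public RestTemplate restTemplate()
  {
   return new RestTemplate();
  }
 }

 // ...and the call from Approach 1 becomes:
 Percentage vatRate = template.getForObject("http://vat-service/vat/{country}", Percentage.class, countryRequired);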

Approach 2: using Feign to hide the remote call

Naked REST calls are all well and good - they're simple - but I always get the feeling that I'm breaking an abstraction. I don't want to feel that I'm making an HTTP call (*) - as a business programmer, I'm calling a service and that's how I want to think in the code.

(*) Note: this will make some people angry. When working on distributed systems, we must never forget the Fallacies of Distributed Computing, in this case we must never forget that we are making a remote call and it can 1) fail and 2) take a long time. Many argue that by abstracting away the remote call, we are making it easy to forget this. It's a good point which I accept and remain mindful of.

It would be great if I could call this service using idiomatic Java/Spring/Dependency Injection, a little like this:

public class BlahBlah
{
   @Autowired
   private VatService remoteVatService;

   public void billCustomerOrWhatever( .. params .., String countryOfOrigin)
   {
      Percentage vatRate = remoteVatService.findVatRateForCountry(countryOfOrigin);
      // blah blah blah      
   }
}

And we can! Yet another element of the Spring Cloud library is called "Feign". I admit I didn't know about this until recently (how do you keep up with Spring when it expands faster than my brain cells can work?) - I'll be covering it on the course but it's as simple as declaring the Interface in the usual Java way:

public interface VatService
{
   public Percentage findVatRateForCountry(String country);
}

Rather like with Spring Data JPA (which I covered in the Spring Boot course), you do NOT implement this interface - it's done for you via a generated runtime Proxy.

You do need to add a few annotations, so that the generation knows how to translate the Java into REST calls. Cleverly, we use standard Spring MVC annotations (of course, usually these annotations are used when defining the server side - this is the first time I can think of where I've used them client-side!)

@FeignClient("vat-service")  // the logical name the VAT service registers with Eureka ("vat-service" is assumed here)
public interface VatService
{
   @RequestMapping(method=RequestMethod.GET, value="/vat/{country}")
   public Percentage findVatRateForCountry(@PathVariable("country") String country);
}
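One thing that's easy to miss: the proxies are only generated if Feign is switched on, typically with @EnableFeignClients on the Spring Boot application class - a minimal sketch, with a made-up class name:

 @SpringBootApplication
 @EnableFeignClients
 public class TrackerApplication   // hypothetical name for the calling application
 {
  public static void main(String[] args)
  {
   SpringApplication.run(TrackerApplication.class, args);
  }
 }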

Beautifully, this all integrates with the Ribbon load balancer that I mentioned above, so if we've replicated the service on multiple nodes, it's easy to provide failover and fallback behaviour.

Approach 3: Use Messages

The call to the VAT service probably needs to be synchronous, because we absolutely need to know the answer before the user can proceed with what they're doing. But in many cases, a message driven solution is another way of building a robust system.

A working example is that our system needs to record viewing figures. Every time a video is watched by a subscriber on our site, we record this as a "watch". We use the data to decide which courses are a hit, and we can also identify unusual viewing patterns (this is a polite way of saying "we can find out who is using site scrapers").

If we have a microservice which is responsible solely for recording viewing, we can of course use REST to log the view.

Notice the new service has its own private database, as described in part 1. We've chosen MongoDB - it has its detractors, but it will work well for this type of data. We can easily store large blocks of the viewing figures in memory, so it's going to be easy to do fast calculations and aggregations. When this was handled by the monolith in MySQL, doing even basic calculations was grinding the whole system to a halt. One of the joys of microservices is we can make these decisions without too much agony - if it doesn't work out, we can tear down the whole microservice and replace it with a different solution. I call this "ephemerality" but it's a pompous word so it will never catch on.

This call doesn't need to be synchronous - the video can safely play even if we haven't yet logged the viewing. There are ways of making asynchronous REST requests (Spring features an @Async annotation which starts a new thread - I've never covered this on any course, but I will maybe get around to that someday).

But this is a great use for messages. Instead of making a call to the service, we could just fire off a message to a queue. We don't care who consumes the message - we just know we've logged that message.

We would then just make the ViewingFigures service respond to messages on that queue - or, even better, we could use a Topic instead of a Queue. With a Topic, multiple consumers can register an interest. So, in the future, if new services are built which are also interested in the EVENT that a video has been watched, well, they can subscribe to that topic as well.

A Topic has multiple subscribers, which can be added over time. Note that on AWS this is implemented using SNS, the Simple Notification Service.

This gains us the robustness that we desired above, without the need for extra plumbing such as circuit breakers and load balancers. If the viewing figures service goes down, it's no problem to the monolith as it isn't calling it - it's just sending a message to a queue or topic. The queue will start to backlog the messages until the service comes back up again, and then the service can catch up on its work.

Things to think about: it is essential to ensure the queue has an extremely high uptime. With a few mouse clicks (* see footnote), Amazon SQS automatically provisions a queue which is transparently duplicated across multiple Availability Zones (data centers). You can't assume it will NEVER go down and you must code for this on the calling side. In this case, I would log the exception and carry on, it's no disaster if we miss some viewing records.
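As a sketch of what the sending side might look like with Spring's JmsTemplate (the class, destination name and message format here are all invented for illustration), including the "log it and carry on" handling:

 @Service
 public class ViewingFigureSender
 {
  private static final Logger logger = LoggerFactory.getLogger(ViewingFigureSender.class);

  @Autowired
  private JmsTemplate jmsTemplate;

  public void recordView(String videoId, String subscriberId)
  {
   try
   {
    // Fire-and-forget: we don't care who (if anyone) consumes this event yet
    jmsTemplate.convertAndSend("video-watched-events", videoId + ":" + subscriberId);
   }
   catch (JmsException e)
   {
    // No disaster if we miss a few viewing records - log it and carry on
    logger.warn("Could not send viewing record", e);
   }
  }
 }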

Although we've covered messaging in standard JavaEE (and we have a course covering this on WildFly releasing soon), for some reason we've never covered messaging for Spring. So that's going to go on the new course as well!

As always, I'm sorry for the long blog post, I didn't have time to write a shorter one - I'm busy working on the new course!

(* footnote) Edit to add: Ahem, I meant, of course - "with a simple script, under source control, using a tool such as Puppet, Chef or Ansible". That needs to be a course too!

Wednesday, 3 August 2016

Microservices, Part 2 - how to deploy

Here's the next in a short series (I think it will be three parts) where I'm musing about Microservices. My challenge is that VirtualPairProgrammers absolutely needs a course on it, but I'm not sure what form that will take. After writing these first two parts, I'm beginning to think that for development, we just need to extend our existing Spring Boot and JavaEE/Wildfly courses, but a further course on Deploying Microservices will be needed. This blog post will focus on that.

In part 3, I'll return to the "dev" side of things and look at how using events can make your system more loosely coupled.

In Part 1 I described the overall concepts in Microservices, and it turns out to be not too complicated:
  • Services aligned to specific business functions
  • Highly cohesive services and loose coupling between them
  • No integration databases (meaning each service will typically run its own data storage)
  • Automated and continuous deployment.
Actually implementing a microservice is not too hard. Designing an overall architecture where the services collaborate to achieve an overall goal, that's a bit harder - but what's really hard is deploying a microservice architecture. To put it another way, the real magic in microservices is in the "Ops" rather than the "Dev".

Unless you're planning on rolling your own infrastructure tools (Netflix did this and they've open sourced them - more later), you're going to rely on open source tools, and lots of them. There are hundreds - probably more - that you could consider. It's overwhelming, and every day new tools are emerging. To try to get you started on the microservices path, this article is going to look at a very simple microservice and the not-so-simple tools needed to get it running.

Note: this article is not intended to be authoritative. These are just the choices we've made and the reasons why. There will be other solutions, and plenty of tools that I've never even heard of. I consider Microservices a journey, and our system is certain to evolve dramatically over the coming years.

Also, I'll be sticking with the tools I know - so for the implementation of the actual service, I'll probably use Spring Framework, JavaEE or associated technologies. If you're coming from other languages, then of course you will have your own equivalents.

Our System at VirtualPairProgrammers.

As described in part 1, our website is deceptively simple - it's a monolith, and behind the facade of the website we're managing well over 20 business functions. It has worked well for us, but it was getting harder and harder to manage. So we decide to migrate to a microservice architecture.

But so much work to do! At least 20 microservices to build! Where do we start?

Well, one appealing thing about microservices (for me) is you don't have to do a big bang migration - you can slowly morph your architecture over time, breaking away parts until the legacy can be retired or left in "hospice" mode.

So we started very simply - we have a business function where we need to calculate the VAT (Value Added Tax*) rate for any country in the world. In the monolith, this code is buried away in a service class somewhere - it's a great candidate to be its own microservice:


Simple to describe, but actually deploying this raises some questions:

How to implement the service?


As stated, the "dev" part isn't too hard - Spring Boot is a perfect fit for microservices, and we're very experienced with it here at VirtualPairProgrammers. But really there is infinite choice here; you could, for example, implement this as an EJB in a Wildfly container. Following the guidance in part 1, this service will have its own data store, and it doesn't really matter what that is. For a simple service like this, we might even keep the data in memory and simply re-deploy the service when VAT rules change.

Should the VAT service be deployed to its own machine (Virtual Machine)?

As mentioned in part 1, we want to be able to maintain total separation of the services, but at the same time we don't want to incur the cost of running a separate Virtual Machine. This is where containerization comes in.

A container differs subtly from a Virtual Machine. A VM has its own operating system, but a container shares the host's operating system. This subtle change has major payoffs, mainly that a container is very lightweight, fast and cheap to start up. Whereas a VM might take minutes to provision and boot, a container is up and running in seconds.
A traditional set of Virtual Machines - each VM has its own Operating System...

...but containers share the host's operating system

The most popular containerization system (quite over-hyped at present) is Docker. This book is an excellent introduction; it's a practical book and definitely helped us to get started:


How do we call the now-remote service?


The usual answer here is to expose a REST interface to the VAT service. This is trivial to do using Boot or JavaEE.
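Just to illustrate (this isn't the course code, and the names are invented), the server side of that REST interface in Spring Boot is little more than:

 @RestController
 public class VatController
 {
  @Autowired
  private VatRateRepository repository;   // hypothetical - could just as easily be an in-memory map

  @RequestMapping(method=RequestMethod.GET, value="/vat/{country}")
  public Percentage findVatRateForCountry(@PathVariable("country") String country)
  {
   return repository.rateFor(country);
  }
 }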

But in this specific example, we are NOT exposing this API to end users - it is only going to be called from our own code. So, it's actually not at all necessary to use REST. There will be many disagreements here, but you could certainly consider an RPC call! RPC libraries such as Java's RMI, or more generic ones such as gRPC (http://www.grpc.io/), have a bit of a bad name, partly because the binary formats are non-human-readable. For service-to-service APIs, RPC is actually fine - it's high performance and works well.

(Human readable forms, mainly JSON over HTTPs [aka REST if you're not Roy Fielding] are the right choice for APIs that are being called by user interfaces, especially JavaScript frameworks).

(Something to think about here, we've replaced a very fast local call with what is now essentially a network call. Remember this will be an internal network call - see the stackexchange discussion here.)

How does the "client" (the monolith) know where the microservice is?


It wouldn't be a great idea to have code like this:

// Call the VAT service
VATRate rate = rest.get("http://23.87.98.32:6379");

I hope that's obvious - if we change the location of the service (e.g. change the port it is running on), then the client code will break. So: hardcoding the location of the services is out.

So what can we do? This is where Service Discovery via a Service Registry comes in.

There are many choices of Service Registries. Actually Java had a solution for this back in the 1990's, in the shape of the JINI framework. Sadly that was an idea ahead of its time and never caught on (it still exists as Apache River, but I've never heard of anyone using it).

More popular - Netflix invented one for their Microservice architecture, which is open sourced as Eureka. I haven't used this, but I understand it is quite tied to AWS and is Java only. Do let us know if you've used this at all.

We are using Kubernetes (http://kubernetes.io/) because it provides a service registry (by running a private DNS service), and LOTS more, particularly...

What if the service crashes?


It's no good if the microservice silently falls over and no-one notices for weeks. In our example, it wouldn't go unnoticed for long, because the failure of the microservice would lead to a visible failure of the monolith (we'd see lots of HTTP 500s or whatever on the main website) - but once we've scaled up to multiple services, this won't be the case. This is where orchestration comes in - in brief, this is the technique of automatically managing your containers (orchestration is a bigger concept than this, but for our purposes it is containers that will be orchestrated). The previously mentioned Kubernetes is a complete orchestration service, originally built by Google to manage (allegedly) 2 billion containers.

Kubernetes can automatically monitor a service, and if it fails for any reason, get it back running again. Kubernetes also features load balancing, so if we do somehow manage to scale up to Netflix size, we'd ask Kubernetes to maintain multiple instances of the VAT service container, on separate physical instances, and Kubernetes would balance the incoming load between them.
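To make that concrete: because the Kubernetes registry is just DNS inside the cluster, the earlier hardcoded call could become something like this (a sketch only - "vat-service" is whatever name you give the Kubernetes Service, and Kubernetes load-balances across the containers behind it):

 // The cluster's internal DNS resolves the Service name to a stable address,
 // so no container IPs or ports leak into the calling code
 VATRate rate = rest.get("http://vat-service/vat/GBR");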

There aren't many books available on Kubernetes (yet) - but at the time of writing, the following book is in early release form:


So the overall message is: you're going to need a lot of tooling, most of it relating to operations rather than development. In Part 3 (probably the final part) I'll look at another way that services can communicate, leading to a very loosely coupled solution - this will be Event-Based Collaboration...