tombola careers

Blog | Tombola Careers

re:Invent 2021 outside the talks

Posted by Ryan Coates
There was plenty of fun to be had outside the talks and honestly, I was blown away by everything on offer; I couldn't hope to see it all.

Outside of talks?



The AWS BugBust was a lot of fun. Amazon CodeGuru (basically a code analyser) had reviewed a large codebase and spat out a huge number of "bugs". Developers could just grab said bugs and bust them; they were all pretty small, low-effort fixes of the kind that tend to litter a larger codebase. I enjoyed the gamification; having a leaderboard and an encouraging platform was fun, and I think it would be an engaging and enjoyable process to include in the team. We could get a list of low-effort bugs, the ones that sit deprioritised in the backlog until they bite us, and plough through them in a couple of days.



There was loads of cool IoT at the builders' fair; one project of particular note was automated beer brewing. As an avid homebrewer this piqued my interest. Disappointingly, I can't seem to find a blog on the project, but I will try to summarise it.


Brewing beer is basically four steps:

  1. Mash – the process of extracting fermentable sugars from your grains by steeping in water
  2. Boil – boiling the sweet wort (the extracted sugary liquid) and adding hops at different times to add flavour and bitterness.
  3. Ferment – the process where yeast turns sugar into alcohol
  4. Condition – the painful period needed after fermentation for the beer to clear and flavours to mature

The automation was really focused on the fermentation process. A Bluetooth float was added to the fermenting beer; this regularly sent data about the progress of fermentation by measuring the density of the liquid (sugary wort is denser than alcohol). The data was sent via a Raspberry Pi to AWS IoT Core, which forwards it via a Lambda to a database, and you can then use Grafana to visualise the data and see in real time, from anywhere, how your beer is doing. The temperature is measured in a similar way, and a heat pad is triggered when the temperature drops below a threshold. Finally, a pump is triggered when the beer reaches its target gravity, so it can be automatically transferred to a keg or secondary fermenter. So really you could set this up so you start the beer fermenting and come back to finished beer.
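To make the gravity readings concrete, here's a toy sketch (the numbers, topic name and payload shape are illustrative, not from the project) of the kind of thing the Pi could derive and forward: the alcohol estimate uses the standard homebrew approximation ABV ≈ (OG - FG) × 131.25.

```shell
# Hypothetical readings; the real float reports specific gravity over Bluetooth
og=1.050   # original gravity, measured before fermentation starts
sg=1.010   # current gravity reported by the float

# Estimate alcohol by volume from the gravity drop: (OG - FG) * 131.25
abv=$(awk -v og="$og" -v sg="$sg" 'BEGIN { printf "%.2f", (og - sg) * 131.25 }')
echo "ABV so far: ${abv}%"   # prints: ABV so far: 5.25%

# On the Raspberry Pi, the reading could then be forwarded to AWS IoT Core, e.g.:
# aws iot-data publish --topic "brewery/fermenter1" \
#   --cli-binary-format raw-in-base64-out \
#   --payload "{\"gravity\": $sg, \"abv\": $abv}"
```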

While this is all quite interesting, it doesn't do anything to solve the real problem with homebrewing: cleaning.

There are some really fun use cases for IoT Core; keep an eye out for the re:Invent 2021 builders' fair projects.

Gameday: Secure legends

We had a really good time at the gameday; it's a scenario-based, free-for-all workshop. We were presented with a fictitious business and an AWS account that had not invested at all in security or good practices, and we needed to work as a team to rapidly fix all the issues.

The gamification made it really fun, and coordinating as a team really helped us progress and enjoy the experience. I spent the majority of my time debugging and fixing a Ruby on Rails codebase and improving a devilishly destructive CI/CD pipeline.

We did chat about the possibility of running one of these in the office; they were fun and provided some real-world, hands-on learning. Sometimes workshops can feel very paint-by-numbers; this was free-flow chaos!

Outside the conference we had an incredible time! Las Vegas is an incredible and ridiculous city; I spent a good portion of the week wandering around the Strip slack-jawed, staring at the lights, fountains, fire and madness! There were loads of sponsored events we popped along to in the evenings, failing to hit golf balls at Topgolf and enjoying the beer and views at Beerpark, to name a few.

Re:Play is the after party, and of course the scale was ridiculous; there were loads of activities like dodgeball, arcade games and a silent disco, as well as live music which blew my mind.


While I learned a good amount and had some really good chats with others in my industry at re:Invent, I maintain the real value of getting away for a conference is to be in a work mindset outside of work. Being away and chatting to other members of the team is really invigorating; I've come back completely amped to make the world a better place!

It was a blast. It was great going away with some great people I really don't chat to enough; what an incredible perk to get away like this. I am spoilt rotten at tombola!


Also I met Marshall so my son Dylan was happy!

 Marshall in Vegas


re:Invent 2021 key themes and talks

Posted by Ryan Coates
Well, it's that time of the year and I've had the incredible fortune of a 7-day trip to Vegas for AWS re:Invent. Six of us went, six! You know you're working somewhere special when six employees get a trip to Vegas!

re:Invent is a conference at a scale that I couldn't really comprehend, kinda like Las Vegas. With a huge number of talks, workshops and events it was really hard work making the most of it, but we gave it a good go. I'll take some time to summarise some of the key learnings and standouts of my wonderfully exhausting trip.


Key themes

I generally find that at every conference I go to there are themes and buzzwords that are very prevalent, and this was no exception.

Like ourselves, many companies start off with a big monolithic codebase and a big enterprise SQL database, and as their applications begin to decompose into smaller, more manageable chunks, so should the database. AWS are pushing the agenda of using the right database for the job; one size really doesn't fit all. There are AWS database types for basically all your data needs, and a series of migration tools to help you get there. In the new world of smaller services we have the flexibility to use the best database for the job; we need to use it wisely.

Making use of huge amounts of data is a growing problem/opportunity in most businesses, and we are no exception. There were a huge number of tech companies and consultancies offering analytics as a service, and AWS are enhancing their offering by essentially making Redshift and Kinesis "serverless".

AI and ML have changed significantly in the past few years; there are a huge number of services that can hold your hand and basically train your models for you. It seems that use cases for ML are generally generic enough that you can pick a template off the shelf. It's incredibly accessible.

Cool tech

Migration hub

A big focus of the talks has been app modernization (moving all your stuff onto cloud-native technologies for the full cloud-shackled experience). There are some interesting tools in preview that can help with this.

Migration Hub Refactor Spaces lets you define applications and start rerouting traffic to microservices using the strangler pattern. You could do this manually with load balancers or API Gateway, but this does hold your hand through the setup. I did find it very clunky that you needed to hand-crank the proxy to get it to work, but I imagine this will be streamlined on release.

The pattern itself is fairly straightforward: add a façade layer in front of your application so you can start carving off routes to microservices.
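As a toy illustration of the routing rule the façade implements (the paths and service names are made up, not from the talk): routes that have been carved off go to the new microservice, and everything else falls through to the monolith.

```shell
# Toy strangler-pattern routing decision (illustrative only)
route() {
  case "$1" in
    /orders/*) echo "microservice" ;;   # a route already migrated
    *)         echo "monolith" ;;       # default: the legacy application
  esac
}

route /orders/42      # prints: microservice
route /account/home   # prints: monolith
```

In Refactor Spaces the same decision is made by the managed proxy rather than your own script, but the logic is the same: one route at a time is strangled away from the monolith.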


The Migration Hub has loads of features to build a migration plan and action it.

Amplify studio

Amplify Studio is a visual development environment to simplify front-end development; you can pull in basic components and define backend APIs and models in a simple but limited way. You can define your own components and functionality both inside and outside the framework, and it does seem to provide a lot of the boilerplate and a nice framework to get started with. As a terrible front-end developer, I would be interested to try to create something and compare the experience.

Private 5G

Probably not the most relevant to web development, but a very cool idea. 5G is extremely fast, often faster than fibre in some areas. With Private 5G you can create a private network without the need for routers and access points all over the place. A very useful prospect for campuses and large offices.


Codegarden 2019

Posted by Ryan Coates
I love my job! I got to go over to Denmark for Codegarden 2019 and got paid for the privilege! We flew out of Newcastle to London, then pretty much back over Newcastle to Copenhagen. We then had to get a midnight train to Odense, well, multiple trains, as there was some work being done on the line. We arrived at the hotel at about 3am, exhausted by the travel but ready for an exciting few days at Codegarden.

Codegarden is like no other conference, we were greeted with high fives and entered a very friendly, happy and tech savvy community.

The conference was fantastic, a diverse set of talks, some very technical some very soft, there was loads of scope to get together with the community and share ideas. I got a huge amount out of the experience and thought it would be useful to document my key learnings in this blog.

There was a big focus on Umbraco v8 (we are on v7.6!), v8 being the new shiny version of Umbraco. The main focus of the release has been to tidy the codebase; anyone who has worked on the older codebases can see the massive value in that. They have reduced the number of projects and lines of code, and use solid, modern architecture principles lacking in previous releases, like dependency injection and clear patterns applied consistently.

They also added a few new features; the only one of real note is Variants, which essentially allows you to have content in multiple languages, useful for an international business!

There was a big focus on extending Umbraco using packages. One really interesting package is PreFlight, which lets you define content rules that are validated before content is published; being able to define your own content rules is massively powerful and a very interesting idea.

One thing that was very apparent from looking at all the amazing work the community is doing is that we don't do open source very well. We have made some pretty amazing extensions to Umbraco that we could share with the community, and we could also benefit from all the amazing plugins and improvements being developed. Umbraco is really key to what we deliver; it accelerates our content delivery and has already made an incredible impact. We need to get more active with the community to gain even more benefits.

As a brand-new team lead I wanted to take advantage of some of the talks with a softer focus; one very interesting talk was about teaching in technology. This is something I think we all struggle with: sharing knowledge is hard, and creating a culture that is always striving to learn new things is a big challenge. A key takeaway was how we talk to each other. We often comment on how smart/clever/brilliant somebody is when they do something well, and this makes it harder for them to take risks lest they fail and fall off the pedestal. Commenting on their effort and complimenting their hard work has a much better result. If we don't challenge ourselves and take risks, we stagnate and don't develop. The other takeaway was to be aware of your mindset: recognise fixed-mindset behaviours and try to move towards a growth mindset. I'll not copy and paste all these behaviours, I will however copy and paste a link.

.NET Core was a big topic of the conference too; the Umbraco core team are working hard on making the transition with minimal disruption. I hosted an open space session with the team to discuss their approach. We too are thinking about how we can move to .NET Core, and it was really reassuring to hear that they are using very similar approaches to what we are planning. We challenged each other's ideas and I think we all came away with a better understanding of the road ahead of us.

Getting out of the office, listening to talks and sharing ideas is a great way to challenge yourself and think beyond the day-to-day development work that fills your week. It refreshes you, gives you great ideas and is a catalyst for innovation. We don't need to go all the way to conferences to do this, though; if we regularly step away from work to think and collaborate, it will kickstart a flow of creativity and innovation. I know there are many of us who do this and benefit from it, but many of us feel like we can't step away from our work, which is a cultural issue we can improve.

Another massive benefit was having a few beers with my colleagues in other teams, getting to know them and finding out about their challenges and joys within their teams was fun and useful.


Continuous Disaster Recovery

Posted by Ryan Coates
Recently I’ve been fortunate enough to be working on a pretty widespread project. This has forced me to touch on many different technical aspects of the business to deliver and as a result could change the way we deliver in the future.


It’s also allowed me to coin the CDR (Continuous Disaster Recovery) term, which is what I would like to talk about a bit.

I’m not going to bore you with all the technical details of the project but I think it’s worth giving a bit of a summary for the sake of context.

The project is (can projects ever really be in the past?) to deliver a usable Tombola CMS (content management system) for all our international countries. Luckily for us, other teams had been working tirelessly on making a robust and feature-complete CMS, so all we would need to do is drop it on a server and "poof", content managed? No, not at all.

We made some decisions early on that would need to be implemented to deliver the project.

  1. We would script our infrastructure using Terraform
  2. We would ship our environment with our code using AMIs
  3. We would centralise application configuration

While these three design goals complicated the project to no end, and probably should have been isolated to separate projects, they did unlock some pretty cool options for deployments.

After loads of work we had scripted the infrastructure, were baking images and had all our configuration defined in the appropriate environment, all we needed now was a way to deploy and synchronize our CMS state between environments.

Our CMS wouldn't really run like a traditional code pipeline. Normally a developer makes changes, which then pass through various staging environments where they are tested and approved before going live. The CMS would need to run differently: authors make changes on the live site, have someone preview their changes, and publish. Changes are instantly live, which means live becomes our single source of truth that other environments synchronize with.

Our CMS uses two places to store its state: S3 buckets store images and media, while a relational database stores the object representations of the pages.

So all we would need to do is restore the database from the source and copy all the s3 content across too, easy right? Well yes, actually it is.

#!/bin/bash
# This script takes an existing snapshot for the CMS instance, deletes the
# existing db and restores it from the snapshot, then adds dependencies
# missing from the snapshot. Finally the s3 buckets are synchronized.
# Expects: $db_instance, $snapshot_name, $vpc_security_group,
#          $source_bucket, $target_bucket

echo "deleting database"
aws rds delete-db-instance --db-instance-identifier "$db_instance" --skip-final-snapshot
# deletion is asynchronous; wait before reusing the identifier
aws rds wait db-instance-deleted --db-instance-identifier "$db_instance"

echo "restoring from snapshot"
aws rds restore-db-instance-from-db-snapshot --db-instance-identifier "$db_instance" --db-snapshot-identifier "$snapshot_name" --db-subnet-group-name mysql_subnet_group

echo "waiting for db instance to become available"
aws rds wait db-instance-available --db-instance-identifier "$db_instance"

echo "adding security groups"
aws rds modify-db-instance --db-instance-identifier "$db_instance" --vpc-security-group-ids "$vpc_security_group"

# s3 bucket will need cross account sharing for this to work
echo "syncing s3 buckets"
aws s3 sync "$source_bucket" "$target_bucket"
echo "All done!"

A simple bash script makes this all happen, but there are a few caveats.

This assumes that a snapshot exists and has been shared with the account you're restoring into; the snapshot also needs to be copied locally, which you can do like this:

aws rds copy-db-snapshot --source-db-snapshot-identifier shared-snapshot --target-db-snapshot-identifier local-snapshot

So you need routine tasks for taking snapshots, sharing them and copying them.
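Those routine tasks could look something like the following sketch; the instance name, snapshot identifiers, region and account number are all placeholders, not our real setup.

```shell
# 1. Snapshot the live database
aws rds create-db-snapshot \
  --db-instance-identifier live-cms \
  --db-snapshot-identifier shared-snapshot

# 2. Share the snapshot with the target account
aws rds modify-db-snapshot-attribute \
  --db-snapshot-identifier shared-snapshot \
  --attribute-name restore \
  --values-to-add 123456789012

# 3. In the target account, copy it locally (cross-account copies need the ARN)
aws rds copy-db-snapshot \
  --source-db-snapshot-identifier arn:aws:rds:eu-west-1:123456789012:snapshot:shared-snapshot \
  --target-db-snapshot-identifier local-snapshot
```

Run on a schedule, this keeps a fresh, restorable copy of live available to every environment.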

You also need to make sure the script that runs this has delegated privileges to the source s3 bucket, like so:

"Sid": "umbracoRestoreS3SharedAccount",
"Principal": {"AWS": "accno"},
"Action": [
"Effect": "Allow",
"Resource": "arn:aws:s3:::source-bucket/*"

So now I can bring up a CMS stack with all of its dependencies and minimal security privileges, and with a single script synchronize it with the live instance.

So why does this matter?

  • We can bring a fully synchronized environment up in minutes, which means developers and authors can play around and experiment with the confidence that they can never permanently break dev.
  • We can disable unneeded environments.
  • We could use this approach to facilitate testing; it could even support test automation.
  • Finally, it forces us to constantly test and improve our DR strategy.

It's still early days for us working with this, but the current trends in shipping environments with code, scripting infrastructure and cloud computing are giving us some fantastic opportunities, which we will definitely be taking advantage of.
