tombola careers


Blog | Tombola Careers


Starting a new job during a global pandemic!

Posted by james conway
Our new Platform Developer, Sam Johnson, gives us a flavour of his first few weeks at tombola, having joined during a global pandemic!

Starting a new job is always daunting. Starting a new job during a global pandemic - that’s just a new level!

With no chance to meet any of my new team in person, no time in the office and no opportunity to sample that amazing food at lunch, I knew that starting at tombola was going to be a bit tricky.

The first few days did a lot to settle my nerves. I was able to meet my new colleagues through Teams, and everybody seemed to go out of their way to get in touch and welcome me. My initial days were full of really useful and informative inductions about the history of tombola and how it has become the company it is today.

Something else that really blew me away was how seriously the company takes the safety and wellbeing of its players, prioritising it before any profits. The safeplay induction was a great way to understand how the company encourages players to play safely, from the range of tools available to help them do so to a whole team dedicated to safeplay.

Overall my first weeks here at tombola have been really great and despite everything that is going on, I already really feel part of the team. I’m looking forward to meeting everyone properly! 



Sweden Session Alerts and Time Limits

Posted by Paul Waller
We launched our Sweden website recently, introducing a few new features around responsible gambling.

I would like to talk about the Sweden Session Alerts and Time Limits project we recently shipped from tombola International.

In preparation for the launch of tombola Sweden, we were presented with regulatory requirements that gave us the opportunity to take our responsible gambling tools to another level. We needed to allow players to limit the amount of time they could spend on our website (daily, weekly or monthly), and occasionally show them a message telling them how long they had been playing and how much money they had won and staked during the current play session.

This could be broken down into a few tasks:

  • Track the logged in time of our players
  • Allow players to set a time limit
  • Enforce these limits: the player should be logged out when their limit is reached and denied login to the site until the next day/week/month
  • Send and show alerts to players
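The enforcement step above mostly comes down to simple calendar arithmetic. Here is a minimal sketch in JavaScript; the names and data shapes are invented for illustration and are not our actual implementation:

```javascript
// Illustrative sketch only: canPlay/nextLoginAllowed and the limit shape
// are invented names, not tombola's real API.

// Given the minutes a player has already played this period and their
// chosen limit, decide whether they may stay logged in (or log back in).
function canPlay(minutesPlayedThisPeriod, limit) {
  // limit: { period: 'daily' | 'weekly' | 'monthly', minutes: number }
  return minutesPlayedThisPeriod < limit.minutes;
}

// Once a limit is hit, login stays denied until the period rolls over.
function nextLoginAllowed(now, period) {
  const next = new Date(now);
  if (period === 'daily') {
    next.setDate(next.getDate() + 1);
  } else if (period === 'weekly') {
    next.setDate(next.getDate() + (7 - next.getDay() || 7)); // days until next week starts
  } else { // monthly
    next.setMonth(next.getMonth() + 1, 1); // first day of next month
  }
  next.setHours(0, 0, 0, 0); // period boundaries start at midnight
  return next;
}

console.log(canPlay(45, { period: 'daily', minutes: 60 })); // true
console.log(canPlay(60, { period: 'daily', minutes: 60 })); // false
```

In the real system the played-minutes counter is tracked server-side against the login session, so the check runs on the backend rather than in anything the player could tamper with.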

We decided to develop this functionality standalone so that it wouldn’t directly depend on anything in our current .Net Framework-driven website. This allowed us to use .Net Core, something I had been excited to get stuck into again since recently attending a workshop on building applications with .Net Core delivered by some amazing developers from Microsoft. This would also prove to be a very cost-effective solution.

Being an AWS customer, we were able to make use of a number of their products that helped us to have a working prototype up and running rather quickly. We built Docker images of our services, so we were able to deploy and run them in ECS (Elastic Container Service). We made use of SQS (Simple Queue Service) to communicate between services and used Parameter Store for our runtime configuration.

A huge benefit of developing in this way is that it allows us to offer this functionality to the websites of the other territories we operate in, without needing to modify code. We can simply deploy new instances of these Docker images and they will pull down their own country specific configuration from Parameter Store. They are stateless.

One thing that we were missing was a way to take the alerts that we were producing on the backend and deliver them to the player in real-time. We chose to use SignalR for this as it fit our needs perfectly and had a version supporting .Net Core.

Our Game team provided us with some great looking Vue.JS components to show these alerts in an eye-catching way. Here is one of them:

Hourly Alert

I think we have provided a good, solid, portable solution. The addition of SignalR as a way to communicate with our players is a first for International Platform and opens up some exciting possibilities for our future developments.


Codegarden 2019

Posted by Ryan Coates
I love my job! I got to go over to Denmark for Codegarden 2019 and got paid for the privilege! We flew out of Newcastle to London, then pretty much back over Newcastle to Copenhagen. We then had to get a midnight train to Odense (well, multiple trains, as there was some work being done on the line). We arrived at the hotel at about 3am, exhausted by the travel but ready for an exciting few days at Codegarden.

Codegarden is like no other conference: we were greeted with high fives and welcomed into a very friendly, happy and tech-savvy community.

The conference was fantastic: a diverse set of talks, some very technical, some very soft, and loads of scope to get together with the community and share ideas. I got a huge amount out of the experience and thought it would be useful to document my key learnings in this blog.

There was a big focus on Umbraco v8 (we are on v7.6!), v8 being the new shiny version of Umbraco. The main focus of the release has been to tidy the codebase; anyone who has worked on the older codebases can see the massive value in that. They have reduced the number of projects and lines of code, and use solid modern architecture principles lacking in previous releases, like dependency injection and clear patterns used consistently.

They also added a few new features; the only one of real note is Variants, which essentially allows you to have multiple languages for content. Useful for an international business!

There was a big focus on extending Umbraco using packages. One of the really interesting ones is PreFlight, which lets you define content rules that are validated before content is published. That is massively powerful and a very interesting idea.

One thing that was very apparent from looking at all the amazing work the community is doing is that we don’t do open source very well. We have made some pretty amazing extensions to Umbraco that we could share with the community, and we could also benefit from all the amazing plugins and improvements being developed. Umbraco is really key to what we deliver; it accelerates our content delivery and has already made an incredible impact. We need to get more active in the community to gain even more benefits.

As a brand-new team lead I wanted to take advantage of some of the talks that had a bit of a softer focus. One very interesting talk was about teaching in technology. This is something I think we all struggle with: sharing knowledge is hard, and creating a culture that is always striving to learn new things is a big challenge. A key takeaway was how we talk to each other. We often comment on how smart/clever/brilliant somebody is when they do something well, which makes it harder for them to take risks lest they fail and fall off the pedestal. Commenting on their effort and complimenting their hard work has a much better result. If we don’t challenge ourselves and take risks, we stagnate and don’t develop. The other takeaway was to be aware of your mindset: recognise fixed mindset behaviours and try to move towards a growth mindset. I’ll not copy and paste all these behaviours, I will however copy and paste a link.

.Net Core was a big topic of the conference too; the Umbraco Core team are working hard on making the transition with minimal disruption. I hosted an open space session with the team to discuss their approach. We too are thinking about how we can move to .Net Core, and it was really reassuring to hear that they are using very similar approaches to what we are planning. We challenged each other’s ideas and I think we all came away with a better understanding of the road ahead of us.

Getting out of the office, listening to talks and sharing ideas is a great way to challenge yourself and think beyond the day-to-day development work that fills your week. It refreshes you, gives you great ideas and is a catalyst for innovation. We don’t need to go all the way to conferences to do this, though; if we regularly step away from work to think and collaborate, it will kickstart a flow of creativity and innovation. I know there are many of us who do this and benefit from it, but many of us feel like we can’t step away from our work, which is a cultural issue we can improve.

Another massive benefit was having a few beers with my colleagues in other teams, getting to know them and finding out about their challenges and joys within their teams was fun and useful.


tombola at NDC London

Posted by brian renwick
NDC London 2019 was my first “big” conference representing tombola. It was a fantastic experience, and I would recommend attendance to anyone at tombola who is given the opportunity.

This year’s conference ran from 30th January until 1st February 2019 and included a large number of speakers holding talks on a variety of topics, ranging from subjects that we, as developers, encounter each day to the future direction of our field.


Opening Keynote

The opening keynote of the conference was presented by Hadi Hariri, a notable developer at JetBrains. Hadi talked about the ubiquity and availability of data in our connected world, and how we as service consumers willingly (and sometimes unknowingly) share our data. For example, through our daily interactions with social media we provide a vast amount of data to service providers, who in turn may allow third parties to access that data to derive certain conclusions about us as individuals.

Many service providers offer their services for free, which on the face of it is great for us as consumers. However, as Hadi highlighted in his talk, the “service” provided to consumers in this case is not the “product” being sold; it is in fact ourselves, or rather, our data, that is the “product” to be traded.

The keynote touched upon some Orwellian ideas around data misuse and the idea of our data being the product to be traded and sold. In the extreme, without full transparency and oversight of service providers, and individual control over data, our willingness to voluntarily share our information in exchange for “free” services could have the potential to result in a bleak future for our society. Hadi highlighted the social credit score that is currently being implemented by the Chinese government as an example of how data can be used by governments and organisations to control populations. Scary stuff!

The keynote was a fascinating opening talk for the conference, and certainly gave me plenty to think about.

Having listened to the talk for an hour or so, and having taken on-board the issues that were highlighted, almost all of the attendees (myself included) immediately decamped to the exposition hall to exchange their data for tons of “free” swag!

Day One

Two talks stood out most for me on the first day of the conference: the first was presented by Roy Derks titled “GraphQL will do to REST what JSON did to XML”; the second was a quick runthrough of CSS Grid, presented by Amy Kapernick.

Roy’s talk about GraphQL was of particular interest. I felt that his overview of the technology gave enough of an insight to envisage how GraphQL could have an immediate benefit to our API-driven project work at tombola.

GraphQL is a technology currently in use by a number of high-profile organisations, such as Facebook, GitHub, and Pinterest. The idea behind GraphQL is that it enables service consumers (e.g. client applications) to query a well defined data structure in order to retrieve only the data that is significant for that application.

In the current world of RESTful APIs, we define service endpoints that return well defined data structures. Sometimes, however, a service must support multiple consumers: one consumer may require a certain subset of the data provided by the service, while another may require an entirely different subset. Over time the requirements of each respective consumer may change, requiring that the service expose more data than is necessary. This problem is typically solved by versioning the RESTful API, and is also the point at which problems may begin to occur. For example, the service codebase may become fragile or difficult to maintain, supporting multiple versions of the API over time may have ownership cost implications for the business, and deprecation of older versions of the API may cause difficulties across multiple services and applications.

GraphQL solves the RESTful API versioning problem by allowing the service to extend the data structure definition without breaking existing consumers. Moreover, since consumers only receive the data that is queried, they do not receive more data than is required for that application.
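The core idea can be sketched in a few lines of plain JavaScript. This is only an illustration of per-consumer field selection, not the real GraphQL query language (which a library such as graphql-js implements), and the data is invented:

```javascript
// Toy illustration of the idea behind GraphQL: each consumer states which
// fields it wants, and receives exactly those fields and nothing more.
const player = { id: 42, name: 'Sam', email: 'sam@example.com', balance: 10.5 };

// Build a response containing only the requested fields.
function query(source, fields) {
  return Object.fromEntries(fields.map((f) => [f, source[f]]));
}

// A mobile client asks only for name and balance...
console.log(query(player, ['name', 'balance'])); // { name: 'Sam', balance: 10.5 }
// ...while a CRM tool asks for name and email. Adding a new field for one
// consumer never forces a new API version on the other.
console.log(query(player, ['name', 'email']));
```

A real GraphQL server adds a typed schema and resolvers on top of this, but the contract is the same: the shape of the response is driven by the query, not by a fixed endpoint definition.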

Amy’s talk about CSS Grid, while short, was interesting and seemed to be of immediate use and benefit for tombola front-ends.

Way back in the bad old days of building web front-ends, developers would employ the use of HTML tables to organise the layout of their pages, sometimes resulting in pages with tables nested within tables. Needless to say this would lead to an unmaintainable mess. Between then and now, things have progressed greatly; we’ve moved away from HTML tables for layout, to floating DIVs, to Flexbox, and now to CSS grid.

Flexbox is a great CSS tool that allows us to build flexible layouts that flow across the page, but it can sometimes be cumbersome to use for creating grid-style layouts. Amy’s talk showed how CSS Grid helps to solve this problem. CSS Tricks has a fantastic guide if you’re interested:

Day Two

My standout talk for day two of NDC London 2019 was about the future of .NET SPAs using Microsoft Blazor, by Steve Sanderson.

Blazor allows us to run .NET Core client-side in the browser on top of WebAssembly. The pre-release changes to .NET Core allow assemblies that target the .NET Standard library to be compiled to WebAssembly (instead of MSIL), which can then be served to the client browser and executed locally. Coupled with the recent development of Razor Components (similar to the Razor views that we use in our ASP.NET MVC applications), this means we will soon have the ability to build client-side applications using C#, with all of the obvious advantages of a single codebase that this entails, such as the ability to share code between client and server.

After Steve’s talk, I felt really excited about the possibilities of building new client-side applications with C# and eventually moving away from the JS stack.

Daniel Roth‘s talk “Introducing Razor Components in ASP.NET Core 3.0” was a great complement to Steve’s talk.

The second day of NDC was rounded off by Guy Royse‘s WebAssembly talk. WebAssembly is a stack-based byte-code language that closely mimics the behaviour of machine-executable code, but does so in the sandboxed environment of the browser. It is a platform that allows developers to write applications in languages such as C/C++, Rust, and C#, and execute that code in the browser.

Guy gave a thorough introduction to WebAssembly, starting with the background and origins of Assembly as a language more generally before moving on to argue the case for a WebAssembly language in the browser. He began his argument by highlighting a bunch of the oddities of JavaScript, the language traditionally used to develop client-side web applications (see for a sample of JS oddities). Following on from this, he described the code pipeline of a JS application, paying particular attention to the overhead the browser incurs in order to execute a JS application: downloading source files, parsing, and finally execution. With WebAssembly, developers are able to overcome many of JavaScript’s oddities by leveraging the consistency and reliability provided by higher-level languages, and the browser is able to forgo the parsing phase for compiled WebAssembly resources and instead execute them directly (it’s worth noting that the performance of applications targeting WebAssembly is almost on par with natively executed machine code!)

The talk ended with Guy writing a native WebAssembly application live on stage – a task not for the faint-hearted because (i) it was a live demo! and (ii) because writing applications natively in WebAssembly is difficult!
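To give a feel for just how low-level that is, here is the classic “add two integers” example: a complete WebAssembly module hand-assembled byte by byte and run through the standard WebAssembly JavaScript API (this is a generic illustration, not code from the talk):

```javascript
// A complete (and tiny) WebAssembly module, written out byte by byte, that
// exports a single function add(a, b). Hand-assembling even this much is
// fiddly, which is why WebAssembly is normally a compile target for
// languages like C/C++ and Rust rather than something you write directly.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: one func of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section: one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);

// Compile and instantiate synchronously, then call the exported function.
const { add } = new WebAssembly.Instance(new WebAssembly.Module(bytes)).exports;
console.log(add(2, 3)); // 5
```

No parsing of source text happens here: the browser (or Node) validates the bytes and executes them directly, which is exactly the pipeline saving Guy described.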

Day Three

The third and final day of NDC London 2019 was a bit of a mixed bag; however, the standout talk for me was given by Scott Hanselman. I’m certain that there are many developers at tombola who are familiar with Scott Hanselman, or who may have read his blog.

Scott’s talk was much less about Microsoft’s tech stack, and much more about a problem very personal to him. Scott Hanselman has been a type-I diabetic for most of his adult life, and as such he must continually monitor his blood sugar levels. His talk started by describing the difficulties of living with diabetes, and the products and services that a number of pharma companies provide to collect and analyse blood sugar levels. Whilst these products can easily collect and analyse data from users, it seems that the pharma companies are not so keen on allowing users access to their data. In response, Scott and others in the community have developed software and created hardware “hacks” to access and analyse their data, so that more informed and accurate decisions can be made based upon their blood sugar levels.

Data captured from Scott’s blood analyser is processed by an app on his smartphone, which in turn passes it to a backend service through a web API. This same API is then able to push out notifications to other connected devices and applications. For example, Scott had developed an app for his smartwatch that alerts him whenever a dip in his blood sugar is detected. He’d even written a PowerShell plugin that shows his blood sugar level in his command prompt!

Whilst Scott’s talk primarily focused on the difficulties of living with type-I diabetes, it highlighted some really interesting and innovative technological solutions to the problem, and definitely gave plenty of food for thought.


There were a lot of good talks at NDC London 2019 (and some not so good ones too). A number of the topics covered felt like they could be useful at tombola for new and existing projects in the short term, such as GraphQL, and others could be useful in the future, such as Blazor.


AWS meetup at tombola house

Posted by phil atkinson
tombola hosted their first AWS NE meetup at our brand new tombola house building here in Sunderland.

Here are the slides of my talk “tombola – a tale of config”

(notes are attached to the slides if you download the presentation):

Click to Download


AWS Task Scheduler

Posted by sohaib maroof
Introducing AWS Task Scheduler

Before I begin

Let me confess, I am not a great writer of words, more of code. So blogging, in general, is quite a challenge for me.

Why did we make a tool?

As a part of our 2020 roadmap, we are trying to offload as much processing-intensive work from our monolithic architecture as possible.

This is to make as much room as we can as we work towards our goal of 100K daily active players.

A part of this means not creating any new Windows scheduled tasks or SQL scheduled jobs unless absolutely necessary. A good alternative is to use AWS Lambda functions or Docker containers to run as tasks.

Currently, we schedule these tasks in Terraform, which means the scheduling is in code. This is starting to become a concern as we don’t have a consolidated view of what tasks run and when. Also, at present, we don’t have a means of quickly disabling or pausing a task if it becomes problematic.

Picture this scenario: you have a task that does a very simple job, sending out emails to customers who have opted in, and that task is scheduled. Now imagine you want to change the schedule: you would have to make a code change and push it to production. And if you wanted to send out, say, a monthly player statement instead of a weekly one, you would have to deploy a whole new task, when the only differences would be one word and the schedule. To achieve this without code changes we proposed a tool that gives us the ability to quickly and easily create, disable, delete and change how tasks are scheduled. This frees the developer to concentrate on developing the task at hand rather than maintaining the code that describes the schedule; the two concerns become decoupled.
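The idea can be sketched as a toy in-memory model. The function names, schedule shapes and cron expressions below are illustrative only, not the actual tool:

```javascript
// Sketch: a schedule is a row of data, so changing when (or whether) a task
// runs is a data edit, not a code deployment.
const schedules = new Map();

function createSchedule(name, cron, tasks) {
  schedules.set(name, { name, cron, tasks, enabled: true });
}

function updateSchedule(name, changes) {
  Object.assign(schedules.get(name), changes);
}

// The weekly statement email becomes monthly with a one-field change...
createSchedule('player-statements', 'cron(0 9 ? * MON *)', ['send-statement-emails']);
updateSchedule('player-statements', { cron: 'cron(0 9 1 * ? *)' });

// ...and a misbehaving task is paused instantly, without touching Terraform.
updateSchedule('player-statements', { enabled: false });

console.log(schedules.get('player-statements').enabled); // false
```

In the real tool the records live in a persistent store behind a UI, and saving a change updates the underlying AWS schedule, but the decoupling is the same: the task code knows nothing about when it runs.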

Where do I fit in this picture?

We had this project on the back burner for quite a while. Unlike the arcade and bingo teams, as a customer-service-aligned developer I deal with Windows scheduled tasks and SQL scheduled jobs day in, day out, so this was a well-suited project for me to see through to production. And although a customer-service-aligned developer might use this more than anyone, it is a tool intended to be used by all territories throughout the company.

What is the Task Scheduler good for?

The task scheduler is intended to allow developers to schedule their computational tasks and manage them with ease. There are several features we required of the scheduler:

  • It lists all the scheduled tasks, whether ECS task, lambda function, SQS queue or SNS topic, and lets you drill down into each schedule to see which tasks it executes. This gives a complete and comprehensive overview of all the schedules in one place.
  • It can create, edit, disable and delete any schedule.
  • It can attach multiple tasks to a single schedule.
  • It has two types of audit, a schedule audit and a user activity audit, giving a complete and thorough view of all the changes made to a schedule.

Shown below is a screenshot of a schedule. It shows all the necessary information about a schedule and the tasks it will execute.

 schedule overview

Technologies Involved

Authentication and authorization: The application is domain authenticated so that only specific groups of people have access to it. For this purpose we used passport.js together with the Active Directory strategy for passport.js, as it perfectly fit our requirements. Passport is authentication middleware for Node.js; extremely flexible and modular, it can be unobtrusively dropped into any Express-based web application.

Task Scheduler is bootstrapped with Create React App together with Materialize. React makes it painless to create a single-page application, updating and rendering just the right components when the data changes. Materialize, on the other hand, is a modern responsive front-end framework; both fit in perfectly to provide a robust client-side experience.

Hope you enjoyed reading!


Two years at tombola

Posted by phil atkinson
I can’t believe I’ve been here at tombola for 2 years, time really does fly when you are having fun. If you aren’t having fun and enjoying the work you do then you are in the wrong job. So much has changed it’s hard to decide where to begin…


The people make a company, so I’ll start with the people. I’ve joined such a great team of people at tombola; it’s made me realise what a great team is and what it can achieve when everyone truly works together. We work on projects together as a team, with a single goal. We communicate with each other, we push each other to be better and, more importantly, we trust one another. This has allowed us, as a team, to deliver project after project, allowing tombola to grow as a business.

From a developer point of view the technology exposure has been nothing short of amazing. Not long before I started here tombola had just “moved” into the cloud, but there were still lots of things to improve upon. I’d never used cloud technologies before, so it was a real eye opener for me; it was like learning to write code all over again. Since then it has been a roller coaster of new technologies we have explored and used: Docker, ECS, Terraform, Lambda, Alexa to name just a few. Training, certification and protected time to learn are all provided by tombola, so my skills are always improving; as they say, “every day is a learning day”.

The projects I’ve worked on have all been different and there are simply too many to mention in detail. Each week brings something different, which keeps things interesting, whether it be building a content management system, designing a promotional giveaway, architecting a new cloud solution or building a new deployment pipeline. Each project offers its own challenges and can often introduce you to even more technologies.

Joining tombola has also allowed me to grow in other ways, to try things I’d never done in the past, such as writing blog posts :)
Others include contributing to open source projects, investigating cutting edge technologies, going to developer conferences, presenting new tech ideas to peers and taking part in a hackathon.
I’ve worked as a developer for over 20 years but I’ve never worked for the right company, until now.

What will the next 2 years bring? We still have a vast technology roadmap to work through and with AWS bringing out new services all the time, who knows? tombola is growing all the time, as is my team, and with a new HQ building to move into, it’s certainly going to be interesting.

Here are a few of my previous blogs:

article: alexa-skill-from-hackathon-to-production

article: tech-brown-bag-runtime-configuration-management



Docker in Production – A Year (and a bit) Later

Posted by tombola ops
It’s been around 2 years since tombola started to investigate the use and potential benefits of container technology, specifically Docker. At times since then progress has been slow and frustrating, but that’s no shock with new technologies.

We’ve been running containers in a production environment for just over a year now and I wanted to look back at how this move has affected the business.


N.B. This isn’t going to be overly technical, just a general review.

The team started by converting two API services to run in containers. The work required not only rewriting these services from .Net to NodeJS, but also creating a new solution to build and deploy them. We settled upon a combination of Team City (which we already used) and Terraform (more on that choice later).

To run and manage our containers we chose to use Amazon’s Elastic Container Service. This provides an orchestration tool to manage a cluster of instances and a private container image repository. At the time it was the easiest solution that met our needs.

So, why Terraform? Two words: placement strategies. We needed to set placement strategies for the containers within the ECS cluster, and these needed to be set upon deployment of the application. The strategies could be set through the AWS console, but at the time not using CloudFormation, which was what we originally wanted to use. Terraform, however, did support this, so the choice was pretty simple. Setting the placement strategies was an important part of maintaining a highly available service.
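As a sketch of what that looks like in Terraform (the resource names here are invented, and recent versions of the AWS provider call the block `ordered_placement_strategy` rather than the older `placement_strategy`):

```hcl
# Sketch only - resource names are illustrative.
resource "aws_ecs_service" "api" {
  name            = "example-api"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.api.arn
  desired_count   = 3

  # Spread tasks across availability zones first, then across instances,
  # so losing one AZ or one instance never takes out every copy.
  ordered_placement_strategy {
    type  = "spread"
    field = "attribute:ecs.availability-zone"
  }

  ordered_placement_strategy {
    type  = "spread"
    field = "instanceId"
  }
}
```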

Since the migration of those 2 initial services the teams have added almost a dozen more. All of these services are deployed using the same pipeline mentioned earlier, which has simplified and sped up the process of getting new services live.

The move has simplified some of the server management in Operations. At least 8 of the services would previously have been built to run on their own stacks of 2-4 instances, leaving us with at least 16 instances running. Instead we have 3 in our main cluster, with room to spare. This saves us money, time and effort.

Since the services have gone live they have been rock solid. Even patching of the live cluster instances is non-disruptive.

Has it all been sunshine and rainbows? Not exactly. There have been technical challenges along the way, but nothing that hasn’t been overcome (eventually). You can’t really argue against the success of this work.

Where do we go from here? 

There’s always work to be done and there’s no exception here. Recently AWS released their managed Kubernetes service, which warrants investigating. Is it a better solution for us than ECS? We’ll have to find out.

Monitoring has always been a bit of a difficulty, but we will soon be implementing a new monitoring solution across all of our infrastructure that will handle container monitoring far better than our current solution.

Security is a constant battle and everyone using containers can step up here. Visiting Dockercon this year opened my eyes to just how much more work is required from a security point of view across the community. For example, seeing a major distro’s official docker image being exploited quickly was a little scary. Container security definitely presents a new set of challenges.

So there is plenty more work to be done and maybe next time I’ll be writing about a migration to Kubernetes, who knows?


Continuous Disaster Recovery

Posted by Ryan Coates
Recently I’ve been fortunate enough to be working on a pretty widespread project. This has forced me to touch on many different technical aspects of the business to deliver and as a result could change the way we deliver in the future.


It’s also allowed me to coin the term CDR (Continuous Disaster Recovery), which is what I would like to talk about a bit.

I’m not going to bore you with all the technical details of the project but I think it’s worth giving a bit of a summary for the sake of context.

The project is (can projects ever really be in the past?) to deliver a usable tombola CMS (content management system) for all our international countries. Luckily for us, other teams had been working tirelessly on making a robust and feature-complete CMS server, so all we would need to do is drop it on a server and “poof”, content managed? No, not at all.

We made some decisions early on that would need to be implemented to deliver the project.

  1. We would script our infrastructure using Terraform
  2. We would ship our environment with our code using AMIs
  3. We would centralise application configuration

While these three design goals complicated the project no end, and probably should have been isolated to separate projects, they did unlock some pretty cool options for deployments.

After loads of work we had scripted the infrastructure, were baking images and had all our configuration defined in the appropriate environments. All we needed now was a way to deploy and synchronize our CMS state between environments.

Our CMS wouldn’t really run like a traditional code pipeline. Normally a developer or author makes changes, which are then passed through various staging environments to be tested and approved before going to live. The CMS would need to run differently: authors would make changes on the live site, have someone preview their changes and publish them, and they would be instantly live. This means live becomes our single source of truth, which the other environments synchronize with.

Our CMS stores its state in two places: s3 buckets hold images and media, while a relational database holds the object representations of the pages.

So all we would need to do is restore the database from the source and copy all the s3 content across too, easy right? Well yes, actually it is.

#!/bin/bash
#this script takes an existing snapshot for the cms instance, deletes the existing db and restores it from the snapshot, then adds the dependencies missing from the snapshot
#finally the s3 buckets are synchronized
#expects these variables to be set: db_instance, snapshot_name, vpc_security_group, source_bucket, target_bucket
set -euo pipefail

echo "deleting database"
aws rds delete-db-instance --db-instance-identifier "$db_instance" --skip-final-snapshot
echo "waiting for deletion to complete"
aws rds wait db-instance-deleted --db-instance-identifier "$db_instance"
echo "restoring from snapshot"
aws rds restore-db-instance-from-db-snapshot --db-instance-identifier "$db_instance" --db-snapshot-identifier "$snapshot_name" --db-subnet-group-name mysql_subnet_group
echo "waiting for db instance to become available"
aws rds wait db-instance-available --db-instance-identifier "$db_instance"
echo "adding security groups"
aws rds modify-db-instance --db-instance-identifier "$db_instance" --vpc-security-group-ids "$vpc_security_group"

#the s3 buckets will need cross-account sharing for this to work
echo "syncing s3 buckets"
aws s3 sync "$source_bucket" "$target_bucket"
echo "All done!"

A simple bash script makes this all happen but there are a few caveats.

This assumes that a snapshot exists and has been shared with the environment you are restoring into; the shared snapshot also needs to be copied locally, which you can do like this:

aws rds copy-db-snapshot --source-db-snapshot-identifier shared-snapshot --target-db-snapshot-identifier local-snapshot

So you need some routine tasks: taking snapshots, sharing them with the target account, and copying them into it.
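Those routine tasks can be sketched as a small script of their own. This is a hypothetical example - the function name, instance name and account id are placeholders, not our actual tooling:

```shell
#!/bin/bash
# Hypothetical routine snapshot job; names and the account id are placeholders.
set -u

# Take a snapshot of an RDS instance and share it with another AWS account
take_and_share_snapshot() {
  local db_instance="$1" target_account="$2"
  local snapshot_name="${db_instance}-$(date +%Y%m%d)"

  # take the snapshot and wait for it to complete
  aws rds create-db-snapshot \
    --db-instance-identifier "$db_instance" \
    --db-snapshot-identifier "$snapshot_name"
  aws rds wait db-snapshot-completed --db-snapshot-identifier "$snapshot_name"

  # share it so the other account can copy and restore it
  aws rds modify-db-snapshot-attribute \
    --db-snapshot-identifier "$snapshot_name" \
    --attribute-name restore \
    --values-to-add "$target_account"
}
```

A scheduled job (cron, or an automation runbook) could then call take_and_share_snapshot against the live instance each night, ready for the copy step above.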

You also need to make sure the script that runs this has delegated privileges on the source s3 bucket - at a minimum list and read access for the sync - via a bucket policy statement like so:

{
  "Sid": "umbracoRestoreS3SharedAccount",
  "Effect": "Allow",
  "Principal": { "AWS": "accno" },
  "Action": ["s3:ListBucket", "s3:GetObject"],
  "Resource": ["arn:aws:s3:::source-bucket", "arn:aws:s3:::source-bucket/*"]
}

So now I can bring up a CMS stack with all of its dependencies and minimal security privileges, and with a single script synchronize it with the live instance.

So why does this matter?

  • We can bring a fully synchronized environment up in minutes, which means developers and authors can play around and experiment with the confidence that they can never permanently break dev.
  • We can disable unneeded environments.
  • We could even use this approach to facilitate testing - it could even support test automation.
  • Finally, it forces us to constantly test and improve our DR strategy.
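Putting the pieces together, spinning up one of these environments is conceptually just two steps: build the infrastructure, then synchronize state. A hypothetical sketch - the terraform directory layout and restore script name are assumptions, not the real tooling:

```shell
#!/bin/bash
# Hypothetical one-shot spin-up of a synchronized environment.
set -u

spin_up_environment() {
  local tf_dir="$1" restore_script="$2"
  # build the scripted infrastructure for this environment
  terraform -chdir="$tf_dir" init
  terraform -chdir="$tf_dir" apply -auto-approve
  # restore the database and sync s3 from live (the restore script shown earlier)
  "$restore_script"
}
```

With that in place, "disabling an unneeded environment" is just terraform destroy, safe in the knowledge that it can be rebuilt and resynchronized in minutes.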

It’s still early days for us working with this, but the current trends in shipping environments with code, scripting infrastructure and cloud computing are giving us some fantastic opportunities, which we will definitely be taking advantage of.

read more

Want organisational change? Tear down the fences

Posted by tombola ops
Tear down the fences
Any business larger than a start-up will be composed of groups of people, each performing a distinct job. These groups may be called teams, divisions, sections or task groups. The words don’t matter. The important thing is that they are pockets of human beings, gathered together to do a certain thing.


An American poet once wrote that ‘Good fences make good neighbours‘. In the corporate environment, the exact opposite is true. The boundaries we construct between teams actually damage relationships.

Us and Them

Any business larger than a start-up will be composed of groups of people, each performing a distinct job. These groups may be called teams, divisions, sections or task groups. The words don’t matter. The important thing is that they are pockets of human beings, gathered together to do a certain thing.

These groups are positive in many ways. They provide employees with support and a sense of belonging. They concentrate expertise and knowledge. They are the logical building blocks of corporate hierarchy. Unfortunately, they have a drawback. When you create a team, you build a fence. Everything inside that fence is us, everything outside is them.

War at the borders

Teams can find themselves in a state of cold war. The spaces between them are like hostile borders. The lines of communication are tenuous and strained. Why is this? More often than not, it is the inevitable result of misaligned goals. Team A wants something that Team B cannot provide without breaking rules. Division C needs something quickly that Division D has to make slowly. Section 1 needs approval from Section 2, who are understaffed and stressed. The groups are driven to conflict by circumstance and mismatched business needs. Inevitably, there are casualties. People get hurt. Grievances, written warnings, hushed departures. Employees who spend their careers secretly at war.

The root of this problem is understanding, or rather the lack of it. Relationships between team members are stronger than those between non-team members. This makes perfect sense. When you work in a group, you have more time to bond. You suffer through the same meetings. You swap personal histories. You construct a shared vision. You spend your time in the same building, sometimes the same room. Because of these shared experiences, you understand each other. Any minor disagreements are seen through the lens of your shared history. You assume goodwill. This is not true of non-group members, who might need (because of their role) to obstruct you from time to time, to get in your way, to say No. When goals collide, the understanding isn’t there. This is the soil in which problems grow.

Possible solutions

Friction between groups is a difficult nut to crack. Tribalism seems to be baked into us. But you are certainly not powerless. You can alter your own attitudes, if you really want to. Here are three suggestions:

  1. Police yourself. Stop being negative about other teams, or members of other teams. No more jokes. No more type casting. No more gossip. Try to change the underlying patterns. You may find this difficult initially. It’s hard to put down old ideas.
  2. Proximity. Try shared projects, shadowing, secondments, anything that gets you in the same room as the other group. Try to break it down into individuals. It is harder to be annoyed at someone when you know the names of their kids.
  3. Compassion. When you really, really need something, and that person is holding you up again, try to step back from the brink. Take a breath. Perhaps that person is unwell, upset, run off their feet. We cannot know the hidden lives of others.


Improve inter-group relationships and you will improve the business. This is obvious, so obvious that it does not really deserve an article. And yet it often gets ignored. In the rush of deadlines, targets, goals and sprints, we can all fall into the trap of us and them. We can all participate in the grumbling narrative of office dispute. But you can make a difference. You can try to change. We can’t tear down all the fences just yet. But maybe we can make a few doors.

See also

The line of poetry in the introduction is taken from ‘Mending Wall’ by Robert Frost, published in the collection ‘North of Boston’ in 1914.

(Published on LinkedIn 03/07/2018)

read more