An Overview of DevOps

Introduction

DevOps is a relatively recent concept that is being adopted more widely in business with each passing year. The term is a portmanteau of ‘development’ and ‘operations’. The main idea behind DevOps is that development and IT operations engineers should work together throughout the entire service lifecycle, from planning through to product support. DevOps is also characterized by development and operations using the same techniques for their systems work, such as version control and testing.

DevOps is largely related to the Agile methodology for software development. DevOps can be seen as an extension of the agile methodology to cover not just the development process, but the entire service delivery process.

 

Origins

Although software development is a young industry compared to those that have been around far longer (manufacturing, agriculture, construction), it has seen an unprecedented amount of churn in the methods proposed for developing and maintaining enterprise systems. In decades past, these systems were simple enough that they did not require a well-defined process to build. As development teams grew and systems became more complex, many methods emerged, including the Waterfall method, Incremental and Iterative development, Agile, Scrum, DevOps and many more. [1]

DevOps inherits from both the Agile Systems Administration (ASA) movement and the Enterprise Systems Management (ESM) movement. ESM came about in the early 2000s, when John Willis and Mark Hinkle got together and first asked how operations methodologies could be improved. O’Reilly’s Velocity Conference in 2009 featured several important presentations describing development/operations collaboration that promoted safe, rapid change in web environments. [1]

At around the same time, the Agile methodology was becoming more widely adopted in the development space. Some forward thinkers began to discuss ASA, which takes ideas from Kanban and lean manufacturing processes and brings them into the context of IT systems administration. [1]

In 2009, Patrick Debois and Andrew Shafer began talking about what they would come to call DevOps, and Debois organised the first DevOpsDays event in Ghent. The ball was well and truly rolling at this stage. DevOps’ success came about through a number of combined forces: a growing automation and toolchain approach, better monitoring and provisioning tools, and the need for agile processes and dev/ops collaboration. Together these gave the DevOps movement the principles, processes and practices we see today. More recently, some thought leaders in the field have expanded their definition of DevOps to also include Lean principles. [5]

 

DevOps in Detail

DevOps’ main goal is to bring the development and operations teams together, so that operations employees are capable of doing development work when necessary and vice versa. This is a somewhat radical idea given the heavy compartmentalisation of responsibilities present in large enterprise system development projects. The resulting lack of communication and integration between teams working on the same project is a source of many problems. [2]

When people are only responsible for a small portion of a large project, they naturally tend to feel less responsibility for the project as a whole. As Maxime wrote in a blog post titled “Why I Quit my Dream Job at Ubisoft”, big teams lead to specialization. [3] And when people specialize they develop a kind of ‘tunnel vision’: they view their own area of expertise as the most important, which greatly complicates decision making. To use the example of an AAA game development team: if you’re responsible for the design of a lamppost in the game, you’re not going to feel much responsibility for the game as a whole. DevOps aims to address this lack of responsibility by integrating and sharing work between team members.

 

Values

The values of the DevOps methodology are essentially the same as those outlined in The Agile Manifesto:

  • Individuals and interactions over processes and tools.
  • Working software over comprehensive documentation.
  • Customer collaboration over contract negotiation.
  • Responding to change over following a plan. [4]

 

Principles

According to John Willis, the principles of DevOps can be boiled down to four main concepts, often summarised by the acronym CAMS.

Culture – people and process must be prioritised first. Without a solid culture, attempts to collaborate and automate will be fruitless endeavours.

Automation – this follows from an understanding of the culture. When team members know and understand each other’s strengths and weaknesses, decisions that support the team can be made. The ‘fabric’ of DevOps is woven by selecting suitable tools for release management, configuration management, systems integration, monitoring and control.

Measurement – without measuring performance over time, improvement is unlikely. A successful DevOps implementation will measure everything it can, from process metrics to people metrics.

Sharing – acting as a feedback loop, this principle completes the CAMS cycle. When development and operations come together to discuss problems, they treat the problem as the enemy instead of playing the blame game between departments.

 

 

Methods

A lot of the methods used in DevOps are an extension of those that can be used in the agile methodology. Methods such as Scrum with operations or Kanban with operations can be used, although there will be a greater emphasis on integrating ops with dev, QA and the product.

In keeping with the principle of automation, builds, versioning, testing, configuration management, deployment, middleware, monitoring, ticketing and provisioning should all be automated.

 

Patterns

Scripted environments – by fully automating environment provisioning, the risk of environment-specific deployment errors is greatly reduced. Scripted environments also make it possible to verify the integrity of a particular version of the software in target environments. Infrastructure automation tools such as Puppet support this pattern through manifest scripts which are committed (like application code) to version-control repositories.
There are numerous benefits to following this pattern:

  • environments are always in a known state;
  • they enable self-service environments and deployments;
  • they lessen the chance that deployments behave differently across unique environment types;
  • environments become part of a single path to production;
  • they lessen the chance that knowledge is maintained only in team members’ heads;
  • and most importantly, deployments are more predictable and repeatable.
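The idea of a scripted, known-state environment can be illustrated with a small sketch. This is written in Python rather than Puppet’s own manifest language, purely for illustration, and the file name and settings are invented. Like a manifest, the function declares the desired state and is idempotent: running it a second time changes nothing.

```python
import os
import tempfile

def ensure_config(path, desired_lines):
    """Converge a config file to the desired state; do nothing if it already matches."""
    desired = "\n".join(desired_lines) + "\n"
    current = None
    if os.path.exists(path):
        with open(path) as f:
            current = f.read()
    if current == desired:
        return "unchanged"          # environment already in the known state
    with open(path, "w") as f:      # converge to the declared state
        f.write(desired)
    return "changed"

if __name__ == "__main__":
    cfg = os.path.join(tempfile.gettempdir(), "demo_app.conf")
    print(ensure_config(cfg, ["port=8080", "log_level=info"]))
    print(ensure_config(cfg, ["port=8080", "log_level=info"]))  # second run reports "unchanged"
```

Because the script converges to a declared state rather than blindly applying changes, it can be re-run safely on any machine, which is what keeps every environment on the single path to production.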

Test-driven infrastructure – a premise of DevOps is the application of patterns from development to operations and vice versa. The test-driven approach of writing tests before writing functional code comes from software development, but it lends itself to infrastructure automation too. As tools for infrastructure automation become more popular, engineers are beginning to apply test-driven practices to their infrastructure. As with environment provisioning, infrastructure testing can be scripted. One such tool is Cucumber, in which tests are described as scenarios and handled in a behaviour-driven manner (when I do X, I should see Y).
A benefit of this pattern is that problems manifest earlier as infrastructure changes are integrated with the rest of the software system using continuous integration. Also worth noting is that the tests and scripted infrastructure become the documentation.

Chaos Monkey – the Chaos Monkey is a continuous testing tool that was developed by the Netflix tech team. The tool intentionally and randomly terminates processes in the Netflix production infrastructure to ensure that systems continue to function in the event of failure. By constantly testing their infrastructure’s ability to succeed despite failure, they’re preparing for the inevitable unexpected outages. By following a principle of “everything fails, all the time”, they are prepared for the worst. [7]
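The principle behind Chaos Monkey can be sketched in a few lines of Python. The “workers” here are just dictionary entries standing in for real production instances (the real tool terminates actual instances); the point is simply that the service must keep answering after a random instance is taken down.

```python
import random

def handle_request(workers):
    """Route a request to any live worker; the service fails only on total outage."""
    live = [name for name, up in workers.items() if up]
    if not live:
        raise RuntimeError("total outage: no live workers")
    return "served by " + random.choice(live)

def chaos_step(workers):
    """Simulate an instance failure by randomly taking one live worker down."""
    live = [name for name, up in workers.items() if up]
    if live:
        workers[random.choice(live)] = False

if __name__ == "__main__":
    workers = {"web-1": True, "web-2": True, "web-3": True}
    chaos_step(workers)             # one instance dies at random
    print(handle_request(workers))  # redundancy means the request is still served
```

If the system only survives when every worker is up, the chaos step exposes that fragility on a quiet Tuesday rather than during an unexpected outage.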

Version everything – it’s still rare to find a team that versions all of the artefacts required to create the software system. The purpose of versioning everything is to establish a single source of truth (a canonical version). Software should be treated as a holistic unit. When everything is versioned, nobody is left navigating a mess of versions of the software system. A new team member should be able to perform a single-command checkout and be left with a complete, working software system.

Delivery/deployment pipeline – this is a process in which different types of jobs are run based on the success of the previous job. Using a continuous integration server (such as Jenkins), various stages can be configured, including a commit stage, an acceptance stage and so on.
The visibility provided by a deployment pipeline ensures that all aspects of the delivery system, including building, deployment, testing and releasing, are visible to every member of the team. By fully automating the process, it becomes possible to deploy any version of the software to any environment automatically. [6]
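The chaining of stages can be sketched as follows. This is a toy model rather than Jenkins configuration, and the stage names and stand-in jobs are invented: each stage runs only if the previous one succeeded, so a failing commit stage stops acceptance and deployment from ever running.

```python
def run_pipeline(stages):
    """Run (name, job) pairs in order; stop at the first failing job."""
    results = []
    for name, job in stages:
        ok = job()
        results.append((name, ok))
        if not ok:
            break  # downstream stages never run after a failure
    return results

if __name__ == "__main__":
    stages = [
        ("commit", lambda: True),      # stand-in for compile + unit tests
        ("acceptance", lambda: True),  # stand-in for automated acceptance tests
        ("deploy", lambda: True),      # stand-in for pushing to the target environment
    ]
    for name, ok in run_pipeline(stages):
        print(name, "passed" if ok else "failed")
```

The structure makes the feedback loop explicit: a broken build can never silently reach a later stage, and everyone can see exactly where a release candidate stopped.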

 

Collaboration

Collaboration is fundamental to DevOps. Traditional development and operations teams often work in silos, with limited inter-team communication until software release time. As bad as it sounds, it’s almost expected that most software will miss its release deadline, and a lack of collaboration is at least partially to blame for this. There are a number of ways to increase collaboration and break down the traditional barriers that prevent software from being delivered regularly.

Collective ownership – this practice dates back to the extreme programming (XP) methodology and is also associated with agile methodologies. In the context of continuous delivery, the emphasis is on ensuring that all types of source files that make up the system are available for any authorized team member to modify. If everything is scripted and everything can be modified, then these source files should include application code, configuration, data and even infrastructure.

Cross-functional teams – this means having teams consisting of representatives from all relevant disciplines. As opposed to treating each discipline as a separate centralized service organization, the delivery team becomes the primary organizational structure. The anti-pattern of this is siloed teams who have their own scripts and processes. Populating a delivery team with business analysts, customer representatives, DBAs, developers, project managers, and QA and release engineers greatly reduces the ‘it’s not my job’ syndrome which plagues so many organizations not adopting a DevOps or similar methodology.

Polyskilled engineers – these are team members who are skilled in all areas of the software delivery process. In general, team members should first be capable of performing their own specialized tasks, but also able to carry out duties in other areas of the delivery process. This includes project managers writing tests, developers modifying database code, DBAs writing functional tests and so on. Even if not every engineer is a jack of all trades, being polyskilled prevents the need to rely on a few key individuals to get software released.

 

Tools

Infrastructure automation: Bcfg2, CFEngine, Chef, CloudFormation, IBM Tivoli, Puppet.
Deployment automation: Capistrano, ControlTier, Func, Glu, RunDeck.
Infrastructure as a Service: Amazon Web Services, CloudStack, IBM SmartCloud, OpenStack, Rackspace.
Build automation: Ant, Maven, Rake, Gradle.
Test automation: JUnit, Selenium, Cucumber, easyb.
Version control: Subversion, Git, IBM Rational ClearCase.
Continuous Integration: Jenkins, CruiseControl, IBM Rational BuildForge.

As you can see from the above illustrative (and very incomplete) list, there is a plethora of tools to choose from when automating operations at every stage of the delivery process. The tools you choose will vary with the requirements of the project; however, each tool should be able to run from the command line. This allows the pipeline to run in headless mode, or to be kicked off with a single click via shell scripts.
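Because every tool is command-line driven, the whole chain can be driven headlessly from one small script. Here is a sketch in Python; the two placeholder commands just print text, and in practice they would be replaced with your project’s real build and test invocations.

```python
import subprocess
import sys

# Placeholder commands: in a real pipeline these might be a Maven build
# followed by a Selenium test run, invoked exactly as on the command line.
STEPS = [
    [sys.executable, "-c", "print('build ok')"],
    [sys.executable, "-c", "print('tests ok')"],
]

def run_steps(steps):
    """Run each command in sequence; abort the pipeline on a non-zero exit code."""
    for cmd in steps:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            return False
        print(result.stdout, end="")
    return True

if __name__ == "__main__":
    sys.exit(0 if run_steps(STEPS) else 1)
```

Since the driver itself exits with a status code, it can in turn be called by a CI server or a cron job, which is exactly what “headless mode” means in practice.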

Observation & Critique

In an article entitled ‘How DevOps is Killing the Developer’, Jeff Knupp makes the point that the DevOps movement, with its reliance on cross-functional, jack-of-all-trades profiles, is better suited to startups than to Big Enterprise. [8]

Knupp writes that the scarcity of resources in a startup environment warrants the jack of all trades profile, as in a team of 7, it makes sense that the DBA can write functional code too. He argues that developers are busy enough dealing with problems within the realm of development to be doing other ‘easier’ work.

I think he has missed the point a little. DevOps is not solely about hiring people who are polyskilled; its primary goal is to improve software delivery through culture, automation, measurement and sharing. The benefit of polyskilled team members is that work can be shared when there is a backlog, an approaching release or an absent colleague, preventing the whole project from being delayed by a change in team dynamic.

It’s important to note that DevOps is a set of principles that should be viewed as guidelines. Each implementation of DevOps will differ from the next, depending on the software being developed, the skillset of the team members, the timeframe for development and the culture of the team. Returning to Knupp’s article, it would appear that his experience of DevOps was of a poor implementation, one that hardly followed the principles. It often happens with the Buzzword of the Year™ that it gets thrown around superficially by people who don’t understand its meaning at all.

 

Conclusion

In order to fully appreciate the potential that could be gained by adopting a DevOps methodology, one must dig deep into the story behind the term. Gaining an understanding of where the term came from and what problems it was trying to solve is essential reading for anyone who is considering a DevOps implementation.

The manual method of managing operations simply became outdated, and those left to deal with the frustration of moving slowly in a rapidly changing world looked to other fields for inspiration. DevOps came about in an attempt to bring the principles of the Agile software development methodology into the realm of IT operations.

The principles of the DevOps methodology are culture, automation, measurement and sharing. Growing the culture of your team means each team member knows their team mates. They know each other’s strengths and weaknesses; this helps them to work together more effectively and intelligently. If a culture of ‘all in this together’ is nurtured, team members will seek to improve processes at every stage of the service delivery system. Measurement (and sharing of measurements on dashboards!) enables the improvement in performance of operations.

As discussed above, there are numerous design patterns associated with the DevOps culture. These range from automated environment provisioning to version control systems. There are also countless tools out there to aid in implementation of these patterns. The tools you choose to use are entirely project/team specific.

A good way of understanding DevOps is as follows: where Agile aims to improve communication and collaboration between developers and their clients, DevOps aims to improve communication and collaboration between developers and operations teams.

 

References

 

 

How to ‘Love Yourself First’

I’ve heard the expression thrown around so many times, I’ve used it so many times, that I think I ought to write in further detail about how one goes about ‘loving themselves first’ rather than just regurgitating the cliché time and time again.

Okay, here we go.

1. Look After Yourself

That sounds dead fucking simple but let’s be honest here, most things that sound simple really are simple in and of themselves. It’s in the implementation that people tend to fall down. In this case, I can describe what looking after yourself means. I can list ways that other people manage to look after themselves. I can link to studies showing that certain activities are proven to improve mental well-being. But you’ve read all that before, and if just reading about this shit made it all better, well, you wouldn’t still be here reading more. It’s only through real intent, and action following that intent, that anyone ever makes a change in their life.

There’s no one remedy for every individual (life would be boring if our minds could be fixed so easily), but there is a framework we can all use. It exists in our brain and it’s called habit forming.

Chances are that if you are feeling shitty about who you are as a person, it’s not the underlying consciousness that you think is a piece of shit, it’s actually the person you habitually become every day when you wake up. It’s the way you think, the things you do (or the things you don’t do) that you are unhappy with.

The good news is that these things are all changeable. Make a list of the things you do, say or think that make you unhappy with yourself.

Seriously, go do it now. I’ll wait.

2. Change Your Habits

Then the next logical step is to correct your bad habits. Replace the old with the new. If the person you were unhappy with was a bit of a slob who spent every evening staring at a screen from the comfort of a couch, well then you know that the person that will make you happy must spend their evenings in another way.

This could be you, if you’re into that sort of thing.

You can’t just give up a habit and not replace it with something else. Choose something that you love or even that you think you might enjoy and decide to practice that activity instead. Habits are literally etched into the neural pathways in our brain. This is why they’re so hard to break once they become automatic behaviours. To develop a new habit, you have to write that behaviour into your brain as a new neural pathway¹. It’s not quite brain surgery but you can see why it’s so difficult.

This relates to goal setting as well, but it’s important to aim for a lifestyle change rather than a physical change. For example, aim to get some exercise twice a week rather than aiming to lose a stone. Goals with deadlines are great when the work involved has a real deadline but when you’re looking to change your own habits, there is no benefit in setting yourself a finish line.

I find that deadlines in this context only serve to demotivate me in the case that I miss them or allow me to take the foot off the gas when I’ve surpassed them. This is the old habit fighting back, trying to get the mind to slip back into the comfort of the habit it knows. A personal example would be my efforts to refrain from smoking cannabis, “a whole week without smoking? Time to celebrate with a spliff”. You see how dangerous this can be?

Okay I’ve gone on quite a bit about habits but I hope you see the relevance. We don’t love ourselves because of how we feel. We feel how we do as a result of our experiences. We experience what we do because of our habits. Hence, changing our habits will change our experience, in turn changing our feelings towards ourselves.

3. Practice Mindfulness

“Boo! Go away with your bleedin’ mindfulness shite will ya!?”

Sorry but this has to be mentioned. If you do not try to be mindful, you will fail. In order to rewire the brain you have to actively monitor it (be mindful). If you do not monitor your behaviour (be it thoughts or actions), the mind will slip back into its default mode and before you know it, you’ll be back to your old habits once again.

This is where practising mindfulness comes in handy. I’ve spoken in depth about mindfulness before but the underlying concept is simply that of awareness of the present moment. Within the observation of the present moment lies the key to self-love. When you hear your inner voice giving you grief about something, instead of believing it, accept it as a thought (and only that) and then let it go.

Don’t fight against negative thoughts. I’ve read so much bullshit about only paying attention to positives and then you will only feel positive. It’s bullshit. Completely and utterly. Accept the negative and the positive. Resisting thoughts doesn’t solve our problems, it actually creates more because in doing so you not only have the negative emotion to deal with but also the feeling of tension that stems from resistance.

Allow yourself to feel sad. Do not however, dwell on negative thoughts by humouring them. When we focus our attention on something, we give it our energy. We give it power to sustain. That’s okay if it’s thinking about how happy we’ll be when we’re on holidays, but why would we ever want to fuel a negative thought about our body image with more fire?

The main point I wanted to get at here was mindfulness with regard to maintaining habits. The optimal technique for this (in my opinion) is starting a morning meditation ritual. That doesn’t mean setting aside an extra hour every morning, it can be done in 10 minutes or less. You can download an app for guided meditation or if you feel comfortable doing so, just sit for a few minutes in relative silence and focus on your breath. Think about your intent for the day while you meditate.

Another great piece of advice I’ve taken on board in my own self-improvement is to start the day by writing down what I intend to accomplish throughout the day². Putting it onto paper makes it that bit more real. It’s harder to dismiss something real as opposed to ignoring a thought alone. If you do this over an extended period of time and keep it in some sort of organised format, you’ll be able to look back over all you’ve accomplished and feel pretty great about it. Which brings me on to my next point.

4. Reflection

And I don’t mean looking at yourself in the mirror. I mean to think carefully and deeply about your progress. As humans we are blessed with the ability to self-reflect. Meta-cognition is one hell of a drug. We have the capacity to exercise introspection and a willingness to learn more about our fundamental nature, purpose and essence (straight from Wikipedia!).

So what I’m saying here is to put that ability to good use. Reflect critically on your progress and also on your failed attempts at progress. This is how we learn what works for us and what doesn’t. It also turns failed attempts into something of value. When you try something out and discover that you don’t like it, you learn a lot about yourself. You know where not to waste time in the future. It’s important to be aware of our own shortcomings so that we can either work on them or try something different.

Think of reflection as a way of fine tuning our technique for self-improvement to suit our specific needs. You’ll make far more progress with a technique that’s specific to yourself than you will a general approach. If you don’t reflect, you won’t know why it’s not working.

Conclusion

Loving yourself is not an easy task for somebody who has developed some bad habits and allowed their mind to build absolute 4 lane motorways of neural pathways for these habits. By focusing on the fact that all we’ve done is build roads in our head, it becomes easier to know that you can tear them down and build new ones.

Rome wasn’t built in a day, and neither was an emotionally stable, self-loving, content mind.

 

Footnotes

  1. Understanding the physical process taking place in the brain can be helpful in changing habits. Focusing on the fact that we’re fighting against the current state of the mind as opposed to combating the mind itself motivates us. It reaffirms the idea that this is a battle we can win.
  2. I use a journal (or diary for those that wish to slag me over it), and follow the structure outlined here to track my progress. There are many online tools that provide the same functionality as paper except you know, on a screen.

Rich Reviews: Recondite @ Opium Rooms, 26th June 2015

Recondite at Opium

RECONDITE (Lorenz Brunner by birth) hails from South Bavaria, in the southern corner of Germany. Growing up in a small village named Berg (German for ‘mountain’), surrounded by dense forest, Lorenz learned to appreciate nature from a very young age. This understanding and awareness of the inter-connectivity of all things has played a big part in Recondite’s rise to success in the underground dance scene, as separate as those two worlds may seem. With releases on revered labels like Innervisions, Life and Death, Dystopian and even Hotflush Recordings, there is no doubt that Recondite has become a serious player. And looking at the frequency of his releases, he has no intention of stopping any time soon.

Recondite hadn’t visited Dublin to perform in almost a year (since the last time Abstract brought him to Opium Rooms), so this set was highly anticipated. There’s no such thing as half measures with this man. His live sets are an extension of his identity, so to say we were getting a glimpse into his soul would be a very fair statement to make. This is even more so the case, given that everything he plays is his own work.

On the night, local techno producer, Will Kinsella, warmed up the crowd for the main man. Will has been getting recognized internationally recently, with support from the likes of Richie Hawtin. As excitement spread throughout the venue, Will took full advantage and really had fun with the crowd. He kept the music under control perfectly, keeping things lighthearted enough yet still energetic and spirited. He finished his set with a wonderful piece of elevator techno that encapsulated the crowd in a bubble of improvised saxophone and excited anticipation on a backdrop of deep and dark percussion.

Recondite took to the stage at around 1am. He began tamely, aware of the crowd’s energy level, not wanting to jump the gun. Of all the artists I’ve seen perform live, none can empathize with their crowd quite like him. There is a certain hint of perfection in his minimalist approach to music. Listening to Recondite tracks through headphones is one thing, but to hear him play live is an entirely different experience full of opulent bass sounds and exquisitely constructed melodies that cut deep into the heart of the listener.

His live set went from calmly melancholic to aggressive and intense. Every piece of music was deserving of its place in the mix. It’s essential to pack a mix with a couple of palate cleansers and Recondite knows this all too well. Like most, he produces his music with ‘club playability’ in mind, although unlike most, his sets consist solely of his own work. The meticulous attention to detail can be heard in each track. It was quite special to watch him take them apart and rebuild them as one hour-long emotional journey, the highlight of which was hearing this monster!

As an artist who works in solitude, one might not expect much of a stage presence from him. I guess there’s some truth to that. I’d rather say that his stage presence was calm and focused. The performance wasn’t about Recondite the man, it was all about the music. You don’t get the arms in the air soaking in the glory of it all, you get a man determined to deliver to a very high standard. That, Recondite is very good at. So good in fact, that he was voted the best live act of 2014 by Resident Advisor users.

“I can’t take it if I don’t go to the woods for a period of longer than three weeks. It’s the air and everything, the birds, I don’t know, it’s very important for me.”

Lorenz spends a lot of time in solitude wandering through forests. There is a whole world of natural sounds out there waiting to be discovered and he is well aware of this. The uniqueness of his work can be (at least partially) attributed to the number of field recordings he makes. The sound of a branch thudding off a fallen tree trunk can become the perfect kick drum, the rustle of weather-hardened leaves can play the role of a hi-hat, and the rattle of light pebbles moving down a stream can (with enough experience) be transformed into a snare drum. Recondite uses these real-world sounds, finding the melody inherent in nature, to create his music. It may seem counter-intuitive that a techno producer finds his muse in nature; you might imagine an abandoned warehouse with lots of ‘metal bits’ to clang together would suit better. In actual fact, Lorenz feels discomfort in urban environments and almost needs to escape back to nature every now and again. In an interview with Resident Advisor he explained that he can’t really connect to “the whole city vibe”.

I imagine a message exists within the truth of Recondite’s existence. That message is one of love, connection and balance. The emotions that engulf us when we return to the natural world are trying to tell us something. Recondite gives us a glimpse into this beauty in his music. I hope for all of our sake that we don’t destroy his source of inspiration to make Ikea tables before we get another opportunity to see him live in Dublin.

Rich Reviews: Mano Le Tough @ Opium Rooms, 3rd May 2015


With the recent growth of the underground dance scene, it’s not unusual to find yourself spoilt for choice when looking to go out in Dublin City. That’s even more the case on a bank holiday Sunday. Mano Le Tough was going up against Karenn (Blawan/Pariah), Chris Liebing and Jeff Mills in the bid to draw a crowd and ultimately throw a great party. While that’s no small task, Mano’s name attracts a certain amount of respect and appreciation from the Irish crowd, and this was evident from the turnout on the night.

 

Opium Rooms, which in a past life was a club called The Village, known for chart music, drink deals and first-year college students, has been revamped and rebranded with the intention of playing host to world-renowned underground DJs. The likes of Detroit Swindle, Phil Hartnoll, Skream and Groove Armada have already made appearances. It is definitely a cool venue, suited to the style of music Mano is so well known for playing. The main room’s high ceiling and relatively narrow width allow for an all-encompassing soundscape that really captivates its audience, and this night was no different in that regard. The sheer anticipation of Mano taking to the stage had the crowd in a whirlwind of excitement as Lil’ Dave (the label manager of Maeve Records, a label run by Mano Le Tough, The Drifter and Baikal) wrapped up his energetic yet tame warm-up set. It was clear from his tune selection that this man knew how to warm up for Mano.

Mano started as expected with a fantastic melodic breakdown, giving his crowd a moment to gather themselves in preparation for the incoming waves of percussion and low frequency vibrations. It’s during these lulls that Mano’s passion for his craft really shines through. It’s during these breaks that we get a glimpse into his deep love for beautiful sounds. The way in which he fuses heartfelt vocals, synthesized melodies and hard hitting drums always guarantees an emotional set.

To give you a taste of the rich quality of Mano’s tune selection, I’ve included this absolutely wonderful piece by Floating Points, ‘Nuits Sonores’.

Focused and in control, Mano brought us on a three-hour journey that oscillated between tranquillity and euphoria. Full of long blends and perfectly timed EQ control, Mano’s mixing was just as flawless as his tune selection. It’s important to appreciate when a DJ can back up his production with technical skill behind a set of decks, and Mano is no stranger to a pair of CDJs. Although he grew up in Dublin, it wasn’t until he moved to Berlin in 2009 that he got heavily involved in the scene, and it was very clear that he learned his craft in probably the best city in the world to do so. The influence of Panorama Bar, where Mano held his album launch in 2013, could be felt in the way he shaped his set.

It’s also very important for me to see that a DJ is enjoying himself while he works, and without doubt, Mano had fun. At times it felt as though the crowd were his puppets and he was our puppet master, gently tugging on our strings, giving us a moment to catch our breath, then ripping hard on them with melodies that cut through the entire mix. Aware of his crowd control, he’d take a step back to appreciate the crowd for their devotion with a couple of over-the-head claps before returning to his deck duties. Not once did he let a mix go out of control: a light-handed flick of the jog-wheel, a disciplined touch of the pitch fader, and then focus would shift back to the mixer with full confidence that the two tracks were perfectly aligned with one another.

As the night went on, the melodic breaks gave way to progressively heavier tracks and an increase in tempo. By 3am, the energy in the room temporarily ‘lifted the smoking ban’ and Mano’s people took full advantage of the opportunity to get a nicotine fix without missing a single beat of the magical journey they were witnessing. Nothing could accurately describe the euphoria as the night rolled on towards 4am. Mano toyed with his audience, leaving nothing but the highest frequency sounds in the mix, the crowd begging for more before he finally gave it to us.

To wrap up, Mano played a (presumably unreleased) remix of Moloko’s ‘Sing It Back’ which used the original track’s vocal elements infused with dark dubby sounds and heavy percussion. To hear the crowd sing it back to him, and to catch him grinning cheekily towards Lil’ Dave as they did so, was a perfect moment.

Kristian Beyer of Âme once said “Thanks to Mano Le Tough, I have no fears for the future of house music” and after witnessing him turn Opium Rooms into a haven for heartfelt, emotional music I can safely say I feel the very same.


Web Scraping with VBA

My first hands on experience with web scraping was one of uncertainty and a significant amount of ‘on-the-job’ learning. Initially I was working as a tech support agent but once the operations manager caught wind of the fact that I’m a programmer, I was moved to the offline team and tasked with writing some sort of script that would scrape a relatively large amount of data from one of the company’s sites and store it in a spreadsheet for easier analysis.

I knew that this was definitely possible; it was just a matter of finding out how. I was given a week to determine if I was up to the task. I started where any sane person would: the Google search bar.

The first potential candidate I found was a Chrome plugin called webscraper.io. The examples they include on their site are very helpful, and things were looking hopeful after a few hours of testing. The issue that led to this idea being scrapped was that it is only designed to scrape data from a website and has no functionality for populating search fields and submitting a form before commencing a scraping session. Although this tool didn’t suit my specific requirements, it could be of great help to someone looking to scrape an online store of all its products, for example.

Digging deeper and deeper through pages of Stack Overflow, I kept coming to the same conclusion: my best bet for automatically navigating through webpages, populating search fields and trawling through pages of results, documenting the contents as I went, was Visual Basic for Applications (VBA).

I had no prior experience with VBA but there’s nothing particularly interesting about it or its syntax that caused me any trouble. I simply added references to ‘Microsoft Internet Controls’ and ‘Microsoft HTML Object Library’ and I had everything I needed within Excel and its IDE for VBA. The following code shows how to create an Internet Explorer object, set it to visible and navigate to a URL.


' Requires references to 'Microsoft Internet Controls'
' and 'Microsoft HTML Object Library'
Dim IE As InternetExplorerMedium
Set IE = New InternetExplorerMedium
IE.Visible = True
IE.navigate ""

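One thing worth knowing before going any further: navigate returns immediately, while the page may still be loading in the background. A minimal sketch (assuming the IE object created above) that blocks until the browser reports the document is ready:

```vba
' Wait until Internet Explorer has finished loading the page
' before touching the DOM (readyState 4 = READYSTATE_COMPLETE).
Do While IE.Busy Or IE.readyState <> 4
    DoEvents ' keep Excel responsive while waiting
Loop
```

Without a wait like this, calls against IE.document can fail intermittently depending on how quickly the page happens to load.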
Okay, so now that we’ve covered how to get to the site primed for scraping, we need to figure out how to interact with elements within the page. What better way to showcase this than to attempt to log in. The first thing to consider is how to address a particular element within a webpage. To do this we get the Document Object Model (DOM), and from there we can drill down to identify specific elements. Depending on the website in question and the prerogative of the developer(s) of said website, the complexity of this varies. The following code snippet works given the assumption that each element has been given a unique id. Sometimes this isn’t the case, but I’ll talk about that in a moment. First, the easy way:


Dim doc: Set doc = IE.document

Dim passwordTextField: Set passwordTextField = doc.getElementById("password")
passwordTextField.Value = "hunter2"

Dim submitBtn: Set submitBtn = doc.getElementById("submit")
submitBtn.Click

Looking at the below image, it’s clear to see how elements in a webpage are related. Each nested element is considered a child of its parent, meaning we can access any element by traversing through this tree structure.

[Image: an example DOM structure]

Let’s imagine that the case is the same as above but the submit button has not been given an id. The first place you ought to look is the parent of that element. Let’s say that element is a div with an id of “submitBtnDiv”. In this instance we can still access the submit button in a number of ways, but it’s a little trickier:


' We can get the first element in the set of children elements
Dim submitBtn: Set submitBtn = IE.document.getElementById("submitBtnDiv").Children(0)

If we can guarantee that the element will always be the nth instance of a given tag, then we can do the following as well:


' If we know that the element in question is the first instance of an input tag
Dim submitBtn: Set submitBtn = IE.document.getElementsByTagName("input")(0)

Note: using IE9 or later, you can also use the method getElementsByClassName().
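For instance, assuming a hypothetical button styled with a class of "submit-btn", the class-based lookup might look like this (getElementsByClassName returns a collection, so we take the first match):

```vba
' Hypothetical class name "submit-btn"; index 0 is the first match.
Dim submitBtn: Set submitBtn = IE.document.getElementsByClassName("submit-btn")(0)
submitBtn.Click
```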

Now that we can access any individual element within a webpage, writing a web-scraper just comes down to the individual site in question. For this example, let’s use IrishRail.ie. Scanning through the source (using Google Chrome’s nifty inspect element feature), we can quickly determine how difficult it’s going to be to access the table of data we want.

[Image: the IrishRail.ie page source, viewed in Chrome’s inspector]

The first thing that jumps out at me here is that there’s a div with an id of ‘content’, that’s a starting point. The next thing I see is that the table tag is its 11th child. Already, we can access the table:


Dim table
' Children is 0-indexed, so the 11th child is at index 10
Set table = IE.document.getElementById("content").Children(10)

Now that we have the table, it’s just a matter of accessing each individual td and placing its value into the corresponding cell in your spreadsheet. The following code shows a simple loop that will extract all data from the above table:


Dim tbody, trs, tds
Set tbody = table.getElementsByTagName("tbody")(0)
Set trs = tbody.getElementsByTagName("tr")

For r = 0 To trs.Length - 1
    Set tds = trs(r).getElementsByTagName("td")

    For c = 0 To tds.Length - 1
        ' Cells is 1-indexed, while the DOM collections are 0-indexed
        ThisWorkbook.Sheets(1).Cells(r + 1, c + 1) = tds(c).innerText
    Next c
Next r

In the above snippet, the getElementsByTagName method is used in two different ways. Firstly, we request the very first instance of tbody within the table tag. Secondly, we request a collection of elements of type tr. This gives us an ordered list of all tr tags nested within tbody. Looping through each row and then doing the same for each td, we can quickly grab all of the table’s contents without specifically referencing each individual piece of data.

And that’s basically all there is to it. There are other aspects one must think about when writing a web scraper that has to work reliably and consistently but this post should give you a basic understanding of the fundamental concepts used in web scraping.
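One such aspect is resilience: a scraper left running unattended shouldn’t abort on the first row that fails to parse. A hedged sketch of how that might look, reusing the row loop from above with VBA’s built-in error handling:

```vba
' Skip rows that fail to parse instead of aborting the whole run.
On Error Resume Next
For r = 0 To trs.Length - 1
    ' ... extract the cells for row r as shown earlier ...
    If Err.Number <> 0 Then
        Debug.Print "Skipping row " & r & ": " & Err.Description
        Err.Clear
    End If
Next r
On Error GoTo 0 ' restore normal error handling
```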