Competitive sitting and leaving loudly


Oh, well, this person isn’t working very hard, are they?

No, you first

Does your office suffer from a competitive sitting problem?

If you’ve not heard of the phrase before, unfortunately it doesn’t have anything to do with a game of musical chairs, or rolling across the office and jousting each other with broom handles.

It’s likely that you’ve experienced it yourself.

Have you ever sat next to your manager, or your team, and felt anxious at the end of the day about what they might think of you if you were the first to leave?

Likewise, have you felt that bubbling stress when traveling into work, making you walk that extra bit faster, so that you can be the first person in your team to arrive? Have you felt that being there first signals that you work harder than everyone else?

That is competitive sitting.

It is a cultural phenomenon where employees believe they are being monitored for how long they spend at their desk, regardless of their output. It can create a workplace where an employee may not go home until their manager goes home, who in turn may not leave until their own manager leaves. Everyone feels like they need to put in more hours because they think that others are putting in more hours.

Clearly this isn’t about the actual work that an individual is doing, but more about the appearance of doing said work.

In culture

The subject of competitive sitting was brought to my attention (via my colleague Toby) by an Instagram post from Anna Whitehouse, who campaigns for better treatment of those who happen to be parents.

Check out her post.

Please leave loudly. Tell people, “I’m just off to pick up baby Ian”. (Noone called their baby Ian last year so trying to revive it). Tell people, “I’m heading off for a beer-and-a-burger down Wetherspoons”. Or simply tell people, “I’m heading home now”. Today I wrote a feature for @telegraph about being human at work in a bid to stop ‘competitive sitting’ – seeing who can remain strapped to their slab of MDF the longest. We’re not talking about a monologue on baby Ian’s weaning journey or an insight into nit comb research. But why hide our lives from work? When did it become OK for a woman to sit at her desk miscarrying for fear of telling anyone because they might know she is ‘trying’? When did life become something to sweep under the carpet with hole punch offcuts and rogue staples? When is that good for business? Every day, Robbert Rietbroek asks his executive team to “leave loudly”. For the chief executive of @pepsi Australia and New Zealand, it’s about sending a message to the entire company. "'Leaders Leaving Loudly’ is something we created to ensure that when team leaders leave, they feel comfortable doing so but also to declare it to the broader team,” he said. "So for instance, if I go at 4pm to pick up my daughters, I will make sure I tell the people around me, ‘I’m going to pick up my children.’ Because if it’s okay for the boss, then it’s okay for middle management and new hires.” Rietbroek said the goal is to reduce “presenteeism” and boost team morale because if you are “younger or more junior, you need to be able to see your leaders go home, to be comfortable to leave”. Since joining @pepsi, the father of two has been championing family-friendly, flexible work policies, as well as attempting to boost the number of women in senior management roles — currently at about 40 per cent. But he wants to challenge the perception that flexible working arrangements are “off limits” for men and non-parents. The head of procurement is an avid surfer, and is given flexibility to take time off when the surf conditions are good. "His entire team knows when he’s not in the office he’s catching waves 🌊.” #flexappeal

— Anna Whitehouse (@mother_pukka), via Instagram

The post references a workplace initiative from Pepsi Australia and New Zealand called “Leaders Leaving Loudly”. When team leaders go home early, they are encouraged to publicly announce why they are doing so. Perhaps they need to go and pick up their children. Perhaps their partner is sick and needs some help at home. Perhaps they’ve worked really hard this week and need a couple of extra hours to decompress. But, regardless of the reason, they announce it openly to their team before leaving the office.

The idea, as explained by CEO Robbert Rietbroek, is to reduce “presenteeism”: employees who are younger or more junior will see their leaders going home for work-life balance reasons and will therefore also feel comfortable leaving when they need to. Leaving loudly drives a culture of working smartly and efficiently whilst being mindful of the other commitments in our lives. It says that we aren’t primarily in the office to simply clock in and clock out; we’re here to get meaningful and impactful work done and then go get on with other equally important things.

Over the years, there have been a number of articles on the subject of presenteeism in Japanese culture. In fact, Japanese firms have been under scrutiny for the expectation of long hours as opposed to efficient work. An unwritten code of how many hours one should be visibly sitting at their desk can create a culture of sleeping in the office, sinecure activities, and an unhealthy, stressed and unproductive workforce. At worst, it can lead to serious harm and death: the Japanese term karoshi means death from overwork, and there are a shockingly high number of reported cases.

Work is a large, meaningful part of life, but the best work is done when suitably rested and refreshed. And, like art and food, the greatest satisfaction is gained through contrast: the rich diversity of the other facets of life. A day filled with productive work, fun family time, a walk, some exercise and a good night’s sleep is far more satisfying than 18 hours of unproductive graft due to tiredness, a silent takeaway dinner where both parties are on their phones while eating, and falling asleep on the sofa.

How can we move towards a healthy balance?

Virtue signaling

The oft-used term virtue signaling means the conspicuous expression of moral values. In recent years it has taken on a negative connotation, used to call out behavior on social media where people partake in particular activities in order to appear virtuous. Such activity can be seen in changing a profile picture to support a cause, retweeting content from charities or politicians, and offering “thoughts and prayers” for any world crisis or notable figure passing away. Changing your profile picture is not the same as actually raising money for a charity.

Competitive sitting is a form of virtue signaling. It says “Look at me! I’m still here and I’m here until the end! I am the best employee!” But – and I don’t know about you – I’d much rather work with people who come in, get stuff done, then go and spend time with their friends and family so that they come back the next day refreshed and ready to do it all over again.

You don’t want employees who sit around in the office until 7:30PM – doing nothing particularly useful – to be congratulated as examples of people who are working hard. Instead, we need to ensure that people are held in high regard for their actual output, rather than their perceived output.

Setting an example

We need to consider the behavior and examples that we set to others in the company. This is why it is important for managers and leaders to make sure that they are sending the right message through their actions.

Think about your team, or your company. What is the culture like with regards to working hours and presence in the office? Do you think that your actions match your expectations for your staff, or is that something for them and not you?

Are you expecting your team to do 60-hour weeks, yet you regularly go home at 4PM? What message does that send? Conversely, do you want your team to feel like they have the flexibility to come and go as they choose as long as they are getting their work done, yet you’re there at your desk from 7:30AM to 7:30PM every day? How do you think that might make them feel, knowing that their manager is always there before them in the morning and after they go home at night?

Check whether your own actions align with the workplace that you wish to see around you, and ensure that you are acting congruently with it. Otherwise it’s unfair to expect the same from others.

How do data science projects work?


A primer for managers, stakeholders and those that are interested

No, not that kind of science. But sort of.

According to the LinkedIn 2017 U.S. Emerging Jobs Report, data science roles on the social network have grown over 650% since 2012. The same report notes that there are 9.8 times more machine learning engineers working today than there were 5 years ago.

But, given that data science is a nascent field, do we really know how to run projects in a way that gives data scientists the space they need to experiment while keeping management and stakeholders satisfied with their progress? After all, despite 43 years passing since the original publication of The Mythical Man-Month, and 17 years since the Agile Manifesto was written in a ski lodge in Utah, you could sometimes argue that we’ve never learned how to deliver software!

Given the inevitability that data science projects will become ever more part of the software industry as a whole, and that more managers will be held accountable for them, and that more stakeholders will be expected to follow along and give feedback, we should all understand how these projects progress.

Here’s what I’ve learned.

But first, to set the scene, and because it’s fascinating, let’s have a very gentle introduction to deep learning, which is one of many techniques used in the field of data science. It uncovers some of the intricacies of doing data science projects.

Scaling deep learning

A number of weeks ago, I had my attention drawn to an excellent research paper via Adrian Colyer’s Morning Paper newsletter. The paper explores how we can improve the state of the art in deep learning, and whether we can make its progress more predictable.

Without going into a lot of technical detail, deep learning – the new, fashionable term for building neural networks with hidden layers to solve problems – tends to be more of an art than a science. We don’t know a huge amount about exactly why these networks work so well for particular problems, and years of designing them haven’t yielded bulletproof design principles or patterns. Clearly, this isn’t promising for the predictability of projects. After all, the industry always wants launch dates!

If you aren’t familiar with neural networks, they can be described as a mathematical model inspired by how neurons work in the human brain. Each neuron takes an input and, depending on some condition, gives an output. Neural networks are software representations of chains of neurons. Deep learning – specifically the “deep” part – means that there are multiple layers of neurons; more specifically, that there are “hidden” layers of neurons: ones that exist in the chain between the input and output layers. These deep learning networks have been used successfully for making computers do “intelligent” tasks such as voice recognition and image recognition.

Building neural networks in software isn’t like regular programming, where a human writes out the specific instructions of what happens when a user clicks a button, or assembles exact queries to a database to retrieve data. Instead, you specify how the network looks: the number of neurons and the number of layers and how they all connect to one another, which data inputs make them activate, their individual activation functions and the algorithm used for training them. That’s called defining the architecture of the network. Some networks have had millions of neurons and billions of connections between them.
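To make that a little more concrete, here’s a minimal sketch of what defining an architecture can look like in code, using the Keras API. The input shape, layer sizes, activations, loss and optimizer are arbitrary illustrative choices rather than recommendations.

```python
# A minimal, illustrative architecture definition using the Keras API.
# The input shape, layer sizes and activations here are arbitrary choices.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),  # input: e.g. a 28x28 image, flattened
    tf.keras.layers.Dense(64, activation="relu"),                       # a "hidden" layer between input and output
    tf.keras.layers.Dense(1, activation="sigmoid"),                     # output: probability that the image contains a cat
])

# The error measure (loss) and training algorithm (optimizer) are part of the definition too.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```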

Once you have an architecture designed, creating a deep learning classifier requires training. For example, if you wanted to use a neural network to determine whether pictures had a cat in them, you would expose the network to lots – often many thousands – of pictures of cats. Special algorithms tune the neurons to learn the commonalities in the cat images, meaning that when the network is shown a picture of a new cat, it can classify it as such.
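Continuing the hypothetical cat classifier sketched above, training then becomes a single call that repeatedly exposes the network to labelled examples. The dataset names here are placeholders, and the early-stopping callback corresponds to stage 4 in the list below: stop when the network isn’t improving any more.

```python
# Training the hypothetical cat classifier defined above (a sketch, not a recipe).
# `train_images` and `train_labels` are assumed to be an existing labelled dataset.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3)

model.fit(
    train_images, train_labels,
    validation_split=0.2,    # hold back some data to measure how well the network generalizes
    epochs=50,
    callbacks=[early_stop],  # stop once the network is no longer improving
)
```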

Creating one of these classifiers roughly unfolds in these stages:

  1. Defining the architecture of the network.
  2. Evaluating it with a small amount of data to see whether it works.
  3. Iteratively training and adjusting the network based on the amount of error seen.
  4. Stopping training when the network performs well enough, or when it isn’t improving any more.

The paper contained the following diagram to show how increasing the amount of training data (x-axis) reduces the amount of error in a given deep learning model (y-axis).

Deep learning with little data is rarely better than random. But after finding a suitable architecture, throwing more data at it makes it improve, to a point.

The small data region involves prototyping approaches on small amounts of data, where often results can be no better than random. Once a design yields some results, more data is gathered and used to train the model to see whether it truly fits the problem; this is represented by the orange circle annotation. If it is improving, then as much data is fed into the network as possible while it improves (the power-law region). We continue until it cannot get any better (the irreducible error region).

If this all sounds a bit strange, then consider the similarities with doing a start-up: typically the founders keep iterating on MVPs and pivoting until the product-market fit is found and customers start signing up, and then decide to stick, invest and scale it into a larger business.

I hope that gives some idea of how deep learning projects work. As it turns out, there are many similarities in the steps taken in most data science projects. As a manager or a stakeholder, you’ll need to understand when projects are moving through these phases to have a greater appreciation for the work going on.

Managing data science projects

If you’re used to running or observing software teams who aren’t doing data science, then you’ll notice that the above sequence of steps sounds fairly peculiar and unpredictable compared to writing your typical CRUD applications, and you’d be right! If your business expects the same kind of predictability and time commitments for data science that it does for regular software projects, then there may be some uncomfortable conversations required: they work very differently.

The approach to training deep learning networks above can be generalized to represent data science projects more broadly, regardless of the exact technique, whether it be NLP, regression or statistics. Data science projects iterate through multiple phases, many of which can result in failure, and many of which require financial decisions for the business, such as whether to invest in more training data or more compute power.

If you want data science success, then it’s incredibly important that managers and stakeholders understand how projects typically work. This ensures there is a mutual understanding and appreciation of risks and progress, and gives data scientists the trust and space they need to experiment.

Common iterative areas are as follows:

  1. Defining the problem: Typically there is a business problem to solve that needs formulating into a data science problem. Is the problem actually solvable? What data are available and what features do we need to model? How do we know that we have succeeded? Which trade-offs, notably in precision and recall, are acceptable to the user?
  2. Finding a model: Assuming we think we can solve the problem, what sort of techniques may be likely to work? How can we prototype them and get some initial results that let us be confident enough to proceed?
  3. Training the model: How and where do we get more data from?
  4. Application to real data: Now that our model is trained and is giving acceptable results, how do we apply it to real data to prove that it works?
  5. Production: Now that we have a successful model, how does it get moved into production and by whom?
  6. Maintenance: Will this model degrade over time and, if so, how much? Does it need retraining at regular intervals in the future?

Steps 1-4 may result in failure and a decision to go back a number of steps, or even to not continue with the project; these steps are scientific processes. In steps 5-6, we should have a model that we would be confident delivering, whether that means handing it to a customer or building it into software. This makes them engineering processes, requiring a different set of skills and often different staff.

1. Defining the problem

Contrary to popular belief, this can be the riskiest, most difficult and most time-consuming part of the entire process. Business problems aren’t often easily translatable into a scientific problem that can be immediately worked on. Stakeholders will want to deliver a feature to users, or to gain insight into some data, but the team will have to start from first principles and build the problem from the ground up.

Firstly, is the problem actually solvable? It may be the case that it isn’t – i.e. it’s intractable by definition, or requires data that don’t exist. It may be the case that a model could be built, but the insight is so subjective that many people will think that it is incorrect. If the data exist, is it clear which parts of those data are required for modelling, and what the features are?

Most importantly, how will the team know when they have succeeded? Is there a clear right answer that will allow them to prove that the model works? Or will it require lots of user testing, and if so, with whom? Additionally, what trade-offs are acceptable? Does it need to work most of the time, or all of the time?

2. Finding a model

Given that there is a clear definition of the problem – say, predicting whether users are about to leave your website, or classifying images of credit cards – the project begins with finding a model. These stages are typically run in a time box, depending on the size and difficulty of the problem. Therefore the first question that the business has to answer is: how long are we willing to spend seeing if this might be possible?

Secondly, in order to find a model, you’ll need data. Specifically, two types of data: training data and test data. You may already have the data you need. In the first example project above, it may be a case of taking a sample of user logs from your website. However, you might not have any data at all: in the case of the second example project, where are you going to get images of credit cards from?

So the next question to ask is: can we get any data to test with? This is then followed by: can we annotate the data easily? Going back to pictures of cats, if a dataset doesn’t already exist, humans are going to have to look at those pictures and say whether they contain a cat or not. If so, you will probably want to crowdsource the annotation using platforms like Mechanical Turk or Figure Eight. This costs time and money and requires a clear definition of the questions to be asked. Sometimes the question is simple, such as “is this a cat?” Sometimes it is highly subjective, such as “does this text convey disappointment?”

Assuming the business is happy with the time and money investment, then this phase will run until a suitable model is found, or until the time runs out and we admit failure. This phase of the project will typically have staff running small experiments, and spending a few hundred dollars on data. The end of the time box is a great opportunity for a demo in front of stakeholders, regardless of whether there has been a success or not: there are always learnings to share.

3. Training the model

If the first phase has been a success, it’s time to train the model. Like the first phase, this centers around time and money. In broad terms, the more training data that are available, the better the model will get. You will need to discuss, again, where to get the data, how to get them annotated, and how much you are willing to spend on that task. It will typically be much more than in the first phase – maybe even thousands of dollars if you need human annotation to be done.

Additionally, depending on the scale of the problem, training the model may require more compute power than is available on local machines. For non-trivial training tasks you will want to utilize fast machines, often from a cloud provider. Even using spot instances can be pricey depending on the task, so upfront estimates are essential to avoid an expensive surprise.
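As a rough illustration of why upfront estimating matters, even a back-of-the-envelope calculation like the one below can prevent a nasty invoice. Every number here is hypothetical; substitute your own provider’s pricing and your own expected training time.

```python
# Hypothetical back-of-the-envelope estimate of GPU training cost.
# None of these numbers are real quotes; substitute your provider's pricing.
instances = 4               # GPU machines training in parallel
spot_price_per_hour = 0.90  # assumed spot price per instance, in dollars
hours_per_run = 36          # assumed wall-clock time for one full training run
training_runs = 5           # expected number of full training runs

estimated_cost = instances * spot_price_per_hour * hours_per_run * training_runs
print(f"Estimated compute cost: ${estimated_cost:,.2f}")  # -> Estimated compute cost: $648.00
```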

Assuming the budget is acceptable, you’ll also want to have a conversation about when we’ll know the model is performing acceptably. Is there a precision and recall goal we are aiming for? Or are we going to commit a certain amount of time and money and then reassess? Given the variables above, this phase will, hopefully, produce a model that works well against test data. Demonstrating it against real data in the problem space is a great way to conclude this phase.

4. Application of the model to real data

Precision is the proportion of your model’s classifications that are actually correct. Recall is the proportion of all possible correct classifications that your model manages to find. (Interested parties might find it fun to read a thorough definition.) There will always be errors in any model, but tuning it to balance precision and recall can greatly improve the model’s effectiveness: high recall typically lowers precision, which means your users may see more errors. Is that acceptable? It could be for classifiers diagnosing illness, where you’d rather be safe than sorry, but it could be extremely irritating for users of text analysis software.
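As a small worked example of those definitions, with entirely made-up counts:

```python
# Worked example of precision and recall, using made-up counts.
true_positives = 80    # cats the model correctly identified
false_positives = 20   # non-cats the model wrongly called cats
false_negatives = 40   # cats the model missed

precision = true_positives / (true_positives + false_positives)  # 80 / 100 = 0.80
recall = true_positives / (true_positives + false_negatives)     # 80 / 120 ≈ 0.67

print(f"precision={precision:.2f} recall={recall:.2f}")
```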

At this point, you’ll have a model that seems to work well at the task at hand. Before giving it the green light towards production, you’ll want to test it on real data, as precision and recall figures alone cannot be trusted to determine whether the model is of acceptable quality.

A good approach is to have the team produce multiple versions of the same model with different parameter tunings representing different precision/recall balances. These can then be applied to real data and shown to your stakeholders for their feedback. This can help them understand the implications and make an informed choice as to which models and tunings acceptably solve their problem.
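One common way to produce those different tunings – assuming the model outputs a probability rather than a hard label – is simply to sweep the decision threshold and report precision and recall at each point, for example with scikit-learn. The label and probability arrays here are assumed to exist already.

```python
# Sketch: sweep the decision threshold of a probabilistic classifier and report
# the precision/recall trade-off at each point, so stakeholders can choose.
# `y_true` (known labels) and `y_prob` (predicted probabilities) are assumed to
# be NumPy arrays produced elsewhere.
from sklearn.metrics import precision_score, recall_score

for threshold in (0.3, 0.5, 0.7, 0.9):
    y_pred = (y_prob >= threshold).astype(int)
    print(f"threshold={threshold:.1f} "
          f"precision={precision_score(y_true, y_pred):.2f} "
          f"recall={recall_score(y_true, y_pred):.2f}")
```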

5. Production

If your team has made it to this phase, then it’s looking very likely that you’ll be getting that feature delivered. But they’re not done yet. Unless you’re extremely lucky, your data scientists will not be experts at production engineering, so it’s at this point that they’ll partner up with other engineers to move the project forward.

Key considerations here include how to technically integrate the classifier into the production system, how to store the models, and the speed of classification, which determines how the code around the models will need to be architected: perhaps it needs wrapping in an API because many parts of the system will need to use it. Maybe, due to data volumes and the speed of classification, it will require tens or hundreds of instances.
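For illustration only, wrapping a trained classifier in a small HTTP API might look something like the Flask sketch below. The `load_model` helper, the endpoint name and the request/response format are hypothetical placeholders for whatever your own model and serialization actually look like.

```python
# Sketch of wrapping a trained classifier in a small HTTP API using Flask.
# `load_model` and the request/response format are hypothetical placeholders.
from flask import Flask, jsonify, request

app = Flask(__name__)
model = load_model("classifier.pkl")  # hypothetical helper that loads the trained model

@app.route("/classify", methods=["POST"])
def classify():
    features = request.get_json()["features"]  # the caller sends the features to classify
    prediction = model.predict([features])[0]  # exact call depends on your modelling library
    return jsonify({"prediction": str(prediction)})

if __name__ == "__main__":
    app.run(port=8000)
```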

Work at this point becomes easier to estimate. It fits more naturally into how your feature teams deliver their work. You can start giving more concrete deadlines for the feature becoming available, and assuming all goes well, you’ll be able to ship it.

Awesome. But is the project done? Not exactly…

6. Maintenance

Like regular software, the models that you build will also need maintenance over time. Depending on the type of data you are processing – especially if it is topical data such as social media feeds – the inputs will change: consider how widespread the use of emojis is today compared with five years ago, or the names of popular video games now compared to last year. Models won’t know about these changes if they are abandoned once shipped.

If the input data does evolve and change rapidly over time, then the team will need to revisit the model in the future to analyse it against new data. Documentation on how the team built, chose, and trained the model is essential, as inevitably everyone will have forgotten the details by the time it needs checking again.
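A lightweight way of keeping an eye on a shipped model is to re-score it periodically against freshly labelled data and flag when it falls below a quality floor agreed with stakeholders. This sketch assumes a scikit-learn-style `predict` interface and that a fresh labelled sample exists.

```python
# Sketch: periodically re-score a shipped model against freshly labelled data
# and flag when quality drops below a threshold agreed with stakeholders.
# Assumes a scikit-learn-style `model.predict` and a fresh labelled sample.
from sklearn.metrics import f1_score

QUALITY_FLOOR = 0.80  # the level agreed when the model originally shipped

def check_model_health(model, fresh_features, fresh_labels):
    predictions = model.predict(fresh_features)
    score = f1_score(fresh_labels, predictions)
    if score < QUALITY_FLOOR:
        print(f"Model quality has degraded to {score:.2f}; time to retrain.")
    return score
```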

Whereas maintenance of the production system is the duty of the production engineers, maintenance of the models is the duty of the data scientists. You’ll need to find an ongoing balance between creating new models and maintaining the ones that you already have in production.

Cycle complete

And that’s it: the rough life cycle of a data science project. In many ways, they are harder to manage than traditional software projects.

Getting something into production involves a much greater chance of failure, many more unknowns, financial implications for data and compute, and cross-collaboration between disciplines. And given that people understand less about how this sort of work is done, and given the hype in the industry about the promise of AI to solve all of our worldly problems, managing expectations is even more challenging. Not to mention the ethical implications of data science as a whole, but that’s for another article…

. . .

Thank you to my colleagues Alastair Lockie, Dan Chalmers, Hamish Morgan, Óskar Holm and Paul Siegel who gave invaluable input as I put together this article.

Why I couldn’t write a manager README


This isn’t working…

OK, so I tried. I really did try to write one. Multiple times.

But for some reason, they all came out trite.

Whenever I had a spare moment during the long Bank Holiday weekend, I would open up my laptop and begin to make notes on the definition of my role, on what I expect from those that report into me, how I like to communicate, the intricacies of my personality, and so on.

Each time I completed a further round of edits on it, I would step away from the computer. Upon returning later on, I’d reopen what I wrote previously, but I was always disappointed with what I read.

Although the purpose of a manager README is to help a relationship between a manager and their direct report get off on the right foot, I was unsure whether mine was even giving an accurate impression of me.

A thought repeatedly entered my mind.

“Who on Earth is this person that needs to write a document to tell me how to work with them? Is this really me?”

Maybe my view was not the opinion of the majority, as the idea of manager READMEs has gained some traction. But, regardless of how others may feel, here’s my personal take on it.

Some background

In recent months, a number of articles have been shared about manager READMEs, predominantly by those that work in software development. The document is written by a manager, typically for a new direct report, and outlines subjects such as the following:

  • The manager’s role and responsibilities
  • How they like to work and communicate
  • Expectations that the manager has from their direct reports
  • Their management style and default mode of operation
  • Any personality quirks they may have

Some managers would share their README document before their new member of staff arrived for their first day at work; consider it homework, if you will. Some shared it before their first 1 to 1 during induction week. Some had it publicly available online.

It’s a neat idea that stirred my curiosity. Should I have one of these? Are my staff putting up with some of my personality traits because I’ve never made them explicitly clear? Have I had any clashes, unbeknownst to myself, with my staff because I hadn’t clearly outlined our relationship?

. . .

Now, if you’re interested in reading some of these documents for yourself, then there are plenty available for you to digest. Hacker Noon published a collection of READMEs from some prominent engineering managers, and Katie Womersley of Buffer wrote about how to use them and shared her own. There are also many more examples dotted around the web, typically findable by typing “manager README” into your favorite search engine.

But, let’s return to my struggles when writing my own.

It’s worth mentioning that I never suffered from not knowing what to write. Over years of managing teams I have become comfortable and confident with how I work, and I’ve had a decent amount of feedback that others are happy with this as well, both from those that report to me and those that I have reported into.

I was able to specify quite precisely what I felt the core aspects of my role were, what I expected from myself and from those that work with me, and I was able to state my communication preferences quite succinctly. I didn’t even think I had that many quirks, on reflection.

However, each time I read it, something didn’t sit right. I couldn’t quite work it out at first. Was it that the content wasn’t truthful? Was it that I was describing an idealized version of myself that I would feel wasn’t entirely truthful if I published it?

Not exactly.

Observations from others

After some thought, I took myself back to the comments sections of the articles that I had originally read, and also spent some additional time searching on Twitter and the Web for what others had been saying about the concept.

A few observations stood out.

The first was by Margaret Heffernan in reply to the original article on Hacker Noon:

Wow, these are all so similar it’s uncanny. I started wondering if actually they were all written by the same person. Everyone wants to be excellent. Everyone is here for you, not themselves. Everyone wants to be responsive but not intrusive. Did they all read the same management book? Sadly what these don’t show up are the differences that make people interesting.

This was congruent with the feeling that I experienced reading my own words: they felt like the words of others.

Every sentence that I was able to commit to (digital) paper felt like plagiarism of something that I’d already read elsewhere. Is management done well, in technology or otherwise, effectively the application of the same core principles that we’ve all read in umpteen management books and learned from our own mentors? Have we reached technology management homogeneity?

Then, I uncovered a further observation.

Whilst reading through related conversation on Twitter, I came across an opinion stating that the concept itself could be flawed.

As Robert MacCloy points out in the same thread, shouldn’t the burden of adaptability be on the manager? After all, isn’t it the duty of the manager to facilitate their direct reports, rather than expecting their staff to change how they work to suit them? Do we not all have different relationships depending on who the other person is?

. . .

I’d hit writer’s block.

At this point, I’d closed my laptop and leaned it up against the sofa. I’d jumped into the car to head into Lewes with Rebecca on an unexpectedly warm spring Sunday. I was sitting in the passenger seat of our car while she was driving down the A27.

As we passed the contour line between the Amex Stadium and the University of Sussex campus, I was trying to articulate why I had found writing my manager README difficult, and how that was now at odds with my original idea for this article.

I began: “You know, it just didn’t feel right producing a page of A4 detailing traits that somebody should know about me. They’re the sort of things that I typically discuss with my staff anyway.”

“Right,” she replied.

“Also, after reading a number of other people’s, they’re all really similar. Am I really just like everyone else?”

“Well, maybe you’re quite comfortable with how you work, but you’ve been doing it for a while. The process of writing it down could be a useful exercise to a new manager who hasn’t spent the time thinking about how they operate – it doesn’t have to be published and shared.”

A very good point.

“Not everyone is the same as you, either. There may be other managers who feel comfortable writing it down and communicating it in that way rather than going through it all in person.”

Also true. Discussing personality traits can still be an awkward conversation for many.

I replied: “Yeah. I’d much rather talk about this in person rather than have to document it all. That’s just how I work, I guess.”

And so it was.

Cmd-A, Del

When we got home at the end of the afternoon, I opened up my laptop again. It seems that my own manager README ended up being really short:

I’m here to help you succeed. Let’s talk about anything on your mind at any time.

And look: it’s even on GitHub.

I prefer to go through how we should best work together face to face, as we begin to get to know one another. You’re still welcome to write your own one for me to read if you like. It’s totally up to you.

Your mileage may vary.