
CAST 2014

Back in August 2014 I was fortunate enough to attend CAST 2014.

CAST, organised by the AST (Association for Software Testing), is an annual software testing conference that brings attendees and speakers from all over the world. This year’s edition was held in New York, in one of New York University’s buildings overlooking 5th Avenue and Washington Square Park. The location was about as convenient as it gets for me, considering I would be flying to another continent – it only takes 6 hours (give or take) to fly across the pond and land in New York.


I arrived in New York on the Sunday, a day before the conference started, but a little too late in the afternoon to take part in the “Scavenger Hunt” that took place around the area.

Monday was a tutorial day, and I took part in “The art and science of test heuristics”, led by Fiona Charles; I cover its main points below.

On Monday evening I went down to the bar where a few attendees and speakers were gathering, and played a few testing games; the one I particularly took a liking to was SET. SET is a pattern recognition game – it’s brilliant for getting your brain working again, and one of my colleagues has even suggested playing it before the start of every meeting to make sure everyone is in gear and ready to work.
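
For the curious, the rule of SET is itself a nice little modelling exercise: each card has four attributes (number, shape, shading, colour), and three cards form a “set” when every attribute is either all the same or all different across the three. Here is a minimal sketch of that check in Python – my own illustration, with cards modelled as tuples, not anything official:

```python
from itertools import combinations

# Illustrative model: a card is a tuple of four attribute values
# (number, shape, shading, colour), each drawn from {0, 1, 2}.

def is_set(a, b, c):
    # For every attribute, the three values must be all the same
    # (one distinct value) or all different (three distinct values).
    return all(len({x, y, z}) != 2 for x, y, z in zip(a, b, c))

def find_sets(cards):
    # Every valid "set" among the dealt cards.
    return [trio for trio in combinations(cards, 3) if is_set(*trio)]

dealt = [(0, 0, 0, 0), (1, 1, 1, 1), (2, 2, 2, 2), (0, 1, 2, 0)]
print(find_sets(dealt))  # -> [((0, 0, 0, 0), (1, 1, 1, 1), (2, 2, 2, 2))]
```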

Tuesday and Wednesday were your standard conference days, with keynotes and talks given by speakers, and I certainly made the most of them. Apart from the usual conversations, I was also lucky enough to take part in a testing challenge on Tuesday evening, where we were asked to test a test execution tool developed by one of the challenge sponsors. I was “recruited” via Twitter, where several people were forming teams, and ended up in an eight-person team. It was great to pair up with someone I had never met before, and I certainly picked up a few tips from him (Michael Corum, @TNRidgeback). The results were announced the following morning: our test report was considered the best alongside another team’s, but the other team got the prize in the end, as they had fewer members and produced a slightly better formatted document.

On Wednesday evening a few of us went out for a meal after the conference had finished, which was a really good way to meet even more people from the community and share experiences – it’s great to find out that other people have the same professional struggles as you, even though they are thousands of miles away. It’s also good to know that you’re at least on the right path, and in my case quite fortunate to work somewhere as good as Red Gate. The evening was rounded off with a few drinks and yet more testing games, in particular the pen game, which Michael Corum and I paired on to try and solve the puzzle. We eventually got it, after quite a few attempts and some excellent coaching from Stephen Blower.

Overall this was an amazing experience, and the people I met were once again one of the highlights. Of course, listening to the likes of Michael Bolton, James Bach and Fiona Charles speak was really enlightening, and a huge amount of learning took place over the three days. It was also great to see familiar faces like Pete Walen, Matt Heusser, Jean-Paul Varwijk and others I had met at Agile Testing Days.

So below you can find my notes about the keynotes, talks and tutorial I attended. Hopefully, they are of some use to you.

“The art and science of test heuristics” Tutorial by Fiona Charles

  • Started off with a knot problem where each group (of approximately 8 people) would form a knot with their hands
    • it led groups to understand the problem, and in turn question the requirements
    • refactor our initial model for solving the problem
    • control the scope – we had too many people in the group for example
  • sometimes you have to abandon certain models;
  • children’s games are great for understanding heuristics and models;
  • prepare to re-model
    • re-model -> embrace chaos
    • challenge the assumptions regularly
    • focus and de-focus
  • how would you go about this?
    • google it!
    • watch your initial assumptions
    • over think it!
    • previous experience
    • instructions can be useful
    • look at patterns
    • trial and error
    • multiple oracles
  • reverse strategy – start with easy first and vice-versa;
  • throw away work and re-do;
  • limit on permutations – find a way to cut through;
  • step away from problems and take a break;
  • switch technique;
  • diversity on a team can be good but also cause a problem;
  • being lucky – surely you must find your own luck;
  • problem could be bigger than you thought;
  • rule: use machines for what they are good at;
  • beer fridge problem (https://www.youtube.com/watch?v=t41wNkGvJ9k)
    • previous experience
    • tech limitations
    • personas (diversity/multiple people)
    • (not) follow instructions
    • act on feedback
    • varying input
    • curiosity
    • consider audience
    • visually appealing (objects)
    • continuous learning
    • context
  • you don’t stick quality in;
  • if you don’t pass one test criterion, don’t go on to the next one;
  • tactful communication
  • follow the money
  • danger in following heuristics – change them?
  • art – creativity coming up with heuristics;
  • science – analysing/measuring those heuristics.

I really enjoyed the tutorial day. It was a simple start, and it was great to hear what everyone thought heuristics meant in their own context. To me, heuristics are rules of thumb: something that helps steer me in a certain direction to try to achieve a certain goal or prove something. In a way, this gives me the expected correct behaviour, but it also allows me to think about what may or may not happen instead and lets me follow a variety of paths. The games and exercises done throughout the day were great, and Fiona’s knowledge provided great input on things we may have been missing.

“Testing is not test cases (towards a performance culture)” by James Bach (keynote)

You can watch the keynote here and the slides are here. Below are some of the notes I scribbled down during the talk.

  • Analytical lag time – the time between running experiments and seeing the results;
  • “Testing is not test cases”
  • use charters for testing – they will help you;
  • there’s no test for “it works”;
  • PerlClip – James Bach’s tool to generate test data;
  • analytical testing is different to analysis of output;
  • every act of testing involves many layers (slide #3);
  • fresh-eyes-find-failure heuristic (slide #5);
  • importance of precise words as testers communicate;
  • highlight text if you’re not sure something is a bug and you need clarification from a developer or someone else;
  • “I don’t yet see a problem here” rather than say “It passes”;
  • the last thing a tester should do is just follow instructions – it’s not intellectual enough;
  • a test case is a set of ideas, instructions, or data for testing some part of a product in some way;
  • the saying “test automation” is toxic – testing can’t be automated, just like test cases don’t contain testing;
  • you don’t ask developers whether they are manual or automated programmers, so why say it of testers;
  • managing tacit knowledge – real testing ability cannot be spoken, it’s tacit knowledge;
  • testing cannot be encoded (slide #12);
  • struggle through practice and examples, not certification;
  • checking is part of testing, just like tyres are part of a car – just not everything – you may not need much testing on your project; test exploration or design work can lead to valuable work – you may not need deep testing to happen again;
  • testing as a performance – like an actor in a play;

“My boss would never go for that” by Alessandra Moreira (talk)

  • Testing and the art of persuasion;
  • ideal world != real world: managers don’t always know about testing or even the skills a tester has;
  • persuasion: process by which a person’s attitudes or behaviour are influenced by communications from other people;
  • persuasion != coercion; manipulation; one way street;
  • persuasion = influence; guidance; negotiation;
  • present your evidence in the style your audience prefers to receive it – e.g. visually;
  • “grow your credibility as a craftsman” James Bach;
  • know your craft:
    • build credibility
    • part of being a good tester
    • how do you know your way is better
    • keep learning
  • build a solid case:
    • arguing skills can be learned
    • consider your boss’ point of view and business priorities
    • gather supporting evidence
    • other possible questions?
  • communicate clearly:
    • use your manager’s communication style
    • present compelling evidence
    • link your priority to your boss’
  • compromise:
    • listen and include different perspectives
    • prepare small compromise in advance
    • start small
    • be prepared to incorporate new ideas
  • things to remember:
    • resisting change is natural
    • most managers want you to succeed
    • be patient
    • find a support system
  • common mistakes:
    • upfront hard-sell
    • too many buzzwords
    • assuming it is a one shot
  • so:
    • figure out why, build a sound case, know your craft, prepare, compromise and be patient.

“Scaling up with embedded testing” by Trish Khoo (keynote)

  • The more we could do at the start, the more challenging testing gets because not many obvious bugs are cropping up;
  • developers find and fix, testers verify expectations;
  • [at Red Gate, at least in my current team, I consider myself quite lucky regarding this as developers are often asking “How are we going to test this?”];
  • take the tester out of the feedback loop between dev/tester/product owner and the feedback loop becomes a lot quicker;
    • pivotal
      • “we don’t actually have a QA department” (Elizabeth Hendrickson)
      • developers doing exploratory testing
      • eliminating the time spent explaining what has been done
    • microsoft
      • “it should be hard to find bugs” (Alan Page)
      • ultimate goal in a software engineering team is to engineer software
    • google
      • “quality is team owned not test or QA owned” (Michael Bachman)
  • raising the bar:
    • how can we as testers improve, and start using our now-free time to provide value?
    • you can’t expect developers to pick up testing practices overnight;
  • learn about human computer interaction, statistics and modelling people.

“Psychology and Engineering of Testing” by Ilari Aegerter and Jan Eumann (slides) (talk)

  • Testers and programmers mutually respect each other – both parties bring a good variety of skills;
  • PTE Agile Testing Manifesto
  • pair on tasks – pair on exploratory testing with developers;
  • educate the team about testing – workshops, dojos, katas, etc;
  • skills needed to make this happen:
    • technical awareness (coding, reading code, database expertise, test environments, service configuration, etc.)
    • domain knowledge

“Paint like an engineer – skills in testing” by Alexandra Casapu (talk)

  • Best way to work on skills is to do something, get feedback, improve it and keep doing this;
  • discuss the method, and apply engineering methods;
  • taxonomy of skills:
    • heuristic does not guarantee a solution
    • it may contradict other heuristics
    • it reduces the search time for solving a problem
    • its acceptance depends on the immediate context instead of an absolute standard
  • do a “my personal testing skills”
    • categories: human-human interaction, attitude determining, rule of thumb, information visualisation, risk-controlling
  • use quiet times (Christmas, other people’s vacations, etc.) to learn or do new stuff;
  • you lose the skills you’re not using;
  • skills are a procrustean bed (procrustean myth);
  • not just learning a skill but interaction between these skills;
  • nurturing skills – choose some areas; learn the cues that lead to mistakes; recognise when you need help.

“Patterns of automation” by Jeff Morgan, founder of leandog.com (talk)

  • Specification by example:
    • specification – user story with acceptance criteria
    • implementation – code with unit tests
    • verification – automated tests
    • duplication in the above points
  • PageObject pattern:
    • web service, web apps, mobile application, data warehousing app
    • stops the need to go and fix tests that have broken in different places – makes it easy to fix in just one place (see the sketch after this list)
  • Default Data pattern:
    • use a Ruby gem to fill in the data we don’t care about
    • just testing for one thing – focus on the things that really matter and remove the noise
  • Test Data management
  • Route Navigation
    • we do this all the time to achieve something
    • the way we navigate may change things
    • pattern helps us navigate through different paths to reach the same point (e.g. a page in a website)
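
To make the PageObject idea concrete, here is a minimal sketch in Python with Selenium – my own illustration rather than the Ruby tooling mentioned in the talk, with a hypothetical login page and made-up locators. The point is that tests talk to page classes, so a markup change is fixed in one place:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    """Hypothetical login page: locators and interactions live here,
    so tests never touch raw element lookups."""

    URL = "https://example.com/login"  # made-up app under test

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def log_in(self, username, password):
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()
        return HomePage(self.driver)  # navigation hands back the next page

class HomePage:
    """Hypothetical page you land on after logging in."""

    def __init__(self, driver):
        self.driver = driver

    @property
    def greeting(self):
        return self.driver.find_element(By.CSS_SELECTOR, ".greeting").text

# A check now reads as intent; a broken locator is fixed once, above.
driver = webdriver.Firefox()
home = LoginPage(driver).open().log_in("tester", "s3cret")
assert "tester" in home.greeting
driver.quit()
```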

Day 2

“STEM to STEAM – Advocacy to Curricula” by Carol Strohecker (keynote)

  • Science, technology, engineering, (arts), math;
  • network for sciences, engineering, art and design – spin off international network;
  • “Need to explore the ontology of software testing” James Bach;
  • “BDD is a design methodology, not a testing methodology”;
  • observing – requires additional patience, concentration and curiosity;
  • imaging;
  • abstracting – focusing, simplifying and grasping essence;
  • innovative thinking;
  • pattern recognition – perceiving information;
  • pattern forming – combining elements/operations in a consistent way that produces a (repetitive) form;
  • analogising – recognising functional likeness between two or more otherwise unlike things;
  • body or kinaesthetic thinking – sensation of muscle, sinew and skin; sensations of body movement, body tensions, body balance (proprioception);
  • empathising – putting yourself in another’s place, getting under their skin, standing in their shoes;
  • new tools:
    • modelling: represent something
    • playing: practice
    • transforming: “segueing” from one of the “tools” to the other
    • synthesising
  • intuition, inter-disciplinary work;
  • “Inventing Kindergarten” by Norman Brosterman (on Friedrich Froebel’s work)

“Testing in an agile context” by Henrik Andersson (talk)

  • Definition of checking (Michael Bolton);
  • confirming existing beliefs, check that the code hasn’t been broken;
  • testing != checking, explain the differences between them;
  • removing coding from programmers can extend the feedback loop and decrease value;
  • “Testing is focusing on exploration, discovery, investigation and learning”;
  • “See new information, see things differently, driven by questions that haven’t been answered before”;
  • pair on:
    • product owner on design of acceptance test
    • on doing exploratory testing session
    • understanding the customer
    • programmer on checking
    • programmer to understand the program
  • be a coach on testing for the whole team to provide other perspectives;
  • pick up, try and learn new testing stuff otherwise you will run out of wisdom to share;
  • do what you can to be valuable, if you can’t do something else, move on;
  • “I’m here to make you look good” – change this to “make us”;
  • the A-Team works because it has one “crazy” guy – be that guy and test boundaries;
  • session-based test management
    • way to manage exploratory testing from Jon and James Bach
    • charters
  • tester velocity (see the sketch after this list):
    • number of sessions available over the sprint
    • planned out of office
    • planned other things
    • actual available number of sessions
  • make your stand-up contribution useful – use a scrumboard, for example;
  • metrics on test time used – bug investigation, set up, learning performance, integration, “testing”, etc.
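
The tester velocity bookkeeping above boils down to simple arithmetic. A rough sketch, with made-up numbers and an assumed session length (not Henrik’s figures):

```python
# Illustrative numbers only: session length and sprint shape are assumptions.
SESSIONS_PER_DAY = 3  # e.g. three 90-minute exploratory sessions per day

def tester_velocity(sprint_days, days_out_of_office, days_on_other_work):
    # Actual number of test sessions available over the sprint.
    available_days = sprint_days - days_out_of_office - days_on_other_work
    return available_days * SESSIONS_PER_DAY

# Two-week sprint, one day out of office, two days of other planned work:
print(tester_velocity(10, 1, 2))  # -> 21 sessions to plan charters against
```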

“There was not a breach, there was a blog” by Ben Simo (keynote)

You should really watch the full keynote here. My notes on this keynote are very short, mainly because Ben, a great speaker, constantly held everyone’s attention, and his content clearly shows the passion for the craft of testing that we should all aim for.

  • a security wall only exists if there is security at every point;
  • the system under scrutiny was already exposing information through usernames – potentially that isn’t a problem, but for healthcare.gov it definitely was; the same goes for email addresses;
  • explore in browser developer tools – loads of information could be leaking through web requests;
  • watch out for security reset questions;
  • a REST service was outputting the GUID (for resetting a password via email) in the browser web request (see the sketch after this list);
  • do not include the user and the GUID reset code in the same email;
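
That reset-GUID leak is exactly the kind of thing you can spot just by watching requests in the browser’s developer tools. Here is a hypothetical sketch of the bug class in Python/Flask – my own illustration, not the actual healthcare.gov code:

```python
from flask import Flask, jsonify, request
import uuid

app = Flask(__name__)

@app.route("/password-reset", methods=["POST"])
def password_reset_leaky():
    token = str(uuid.uuid4())
    # ... store the token and email it to the account owner ...
    # BAD: the secret the email is meant to deliver is also echoed in
    # the HTTP response, visible to anyone in the network tab.
    return jsonify({"user": request.form["username"], "reset_guid": token})

@app.route("/password-reset-fixed", methods=["POST"])
def password_reset_safe():
    token = str(uuid.uuid4())
    # ... store the token and email it to the account owner ...
    # Better: confirm the action without exposing the secret, and avoid
    # confirming whether the account exists at all.
    return jsonify({"status": "If that account exists, an email has been sent."})
```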

“Thinking critically about numbers: defence against the dark arts” by Michael Bolton and Laurent Bossavit (workshop)

A workshop from Michael Bolton doesn’t come around all that often. Unsurprisingly, the room reached capacity fairly quickly, and it was soon time to start.

Here are the key points of the workshop:

  • We were asked to investigate a number of different claims;
  • we had to note whether we thought each claim was true or false, and what it would take to change our mind;
  • heuristics and tactics used:
    • locate the primary source
    • finding cited work quickly
    • image search for variants of the same chart
    • search by date to find when a claim was first made
    • original data allows you to plot your own chart
    • locate other claims about the same phenomenon
    • search for quotations
  • two tracks about the world
    • claim
      • construct
      • observation
      • metric
      • generalisation
      • representation
      • interface
    • claimant
      • existing bias
      • sources
      • trustworthiness
      • motivation

“Software Testing state of the practice (and art! and science!)” by Matt Heusser (keynote)

  • Have we swung the pendulum too far to programming with agile?;
  • automation and technology is increasing across society;
  • intractable transience of test;
  • we have a high turnover in test, perhaps that’s ok? perhaps we need to plan accordingly;
  • fragmentation within testing – different keynotes talking about testing as different things;
  • we use words we can’t agree on;
  • result is an ignorance of test;
  • healthcare.gov was probably tested very well but they were “told off”;
  • social structure makes it hard to make corrections;
  • in software development, agile (scrum, careful with assumptions) is winning;
  • scrum was created because of constant change of requirements, vision, etc;
  • xprogramming – need for shorter releases;
  • scrum says that testing should be done by embedded members of the (development) team;
  • how to win big? save the scrum;
  • system thinking skills applied to getting rid of queues and wait states;
  • honesty (smart empowered testers can do amazing things);
  • experts on context

CAST was an amazing conference, one of the highlights of my year. I hope to attend again at some point. Hopefully you have got something out of my notes. The call for participation is now open and you can check it here.


Agile Testing Days 2013 – Day 0 (Tutorial) Notes

Agile Testing Days in 2013 was the first conference I went to since I started my professional career in the software industry.

For those of you who don’t know, Agile Testing Days is an annual conference held in the beautiful city of Potsdam, around 15 miles south-west of Berlin, Germany. It’s considered by many to be the best software testing conference in Europe, perhaps on a similar level to EuroSTAR (whose location changes every year, as far as I know).

The conference is held Monday to Thursday, with the first day being full of tutorial tracks. In this post I will cover the full-day tutorial I attended, “Exploratory Testing in Practice” by Matt Heusser and Pete Walen. Being quite new to the whole testing community, I did not know a lot about these two guys, but since then I’ve learned an awful lot from them and I am really pleased I picked the tutorial I did back then.

The day started off with people arriving in a tutorial room which was a mess. Well, a mess on purpose. The game we played, alongside the dice game (which at first did my head in), was to rearrange the room following different rules and instructions, while also working out who else was in the tutorial (from a professional and a personal point of view). It certainly set the tone for the day, as the rest of it was again full of games and activities rather than listening to Matt and Pete lecture about testing – which, in my opinion, is how a tutorial should be run!

One of the best activities we did during the day was playing battleships. If you are not familiar with the game: you play against someone else (or pair up and play 2v2, like we did), and you each have an attacking and a defensive grid of 10 by 10. In the defensive grid you set up your ships and other “vehicles” (jet fighters, etc.), and in the attacking grid you plot the hits and misses you’ve got against the team you’re playing. Teams take turns, and the first team to destroy all the other team’s vehicles wins.

The point of the exercise was that one team had to follow a previously planned script. So, for example, you could roll the dice to add some randomness into your script, then follow a path to get picks on the opposing team’s chart. The opposition, in our case, was allowed to take an exploratory approach, basing their attacks on previously learnt information (if they had hit something at position B8, they would most likely try B9 and B7 next, and possibly A8 and C8 – you weren’t allowed to position vehicles diagonally). To our surprise, we actually managed to defeat the opposing team even though we had to follow a script. In fairness, our script crashed after a few goes and we had to make some changes – so exploratory was the default winner, I guess.

Since then I have actually used the battleships exercise during an interview, where the interviewee was someone with a lot of experience in software testing; one of the things we wanted to find out was about their exploratory testing practices and what they thought exploratory testing was all about. I truly recommend this as a possible testing kata or for use during an interview, although bear in mind that not all cultures are familiar with the game and it can take some time. It does cover a lot of ground, though, from the “is this candidate asking questions before they jump into the puzzle and solve it?” point of view to the more concrete exploration vs. scripting one.

In the second part of the tutorial we were given a fair bit of information, such as:

  • quick attacks that can be used during exploratory testing sessions;
  • Michael Bolton’s stopping heuristics;
  • mind map tools: xmind, mindmup;
    • mind maps allow for visibility and to create conversations;
  • SF D(I)POT mnemonic:
    • Structure: view source, ssl/https;
    • Functionality: change colours, drag and drop, search, duplicate;
    • (I)nterface: csv export, credit card, slow connections;
    • Platform: browser;
    • Operations: resize browser, back and forth between pages;
    • Timings: logout timings, real time updates;
    • (User types): user, guest, owner;
  • Cem Kaner’s website;

I also recorded 3 quotes from Matt:

  • “When sequence makes a difference we have to consider it”
  • “You are guaranteed to miss things that are not in the context.”
  • “In a competitive market place you need software that is beautifully crafted, rather than a set of screws together tap tap tap done! kind of thing”

I thoroughly enjoyed the tutorial day, and hopefully did it justice with this set of notes. But most importantly I got to meet 2 guys (amongst many others during the day) who have become people I admire in the software testing community.

London #TesterGathering with Michael Bolton

Earlier this year I was invited to go down to London to attend the monthly testers’ gathering organised by Tony Bruce. It was the first one I had been to, and unfortunately I haven’t had the chance to attend any others, mainly due to lack of time – the starting time of the meet-up is also not ideal for someone who has to travel from Cambridge, where I am based at the moment. Seeing that we also have Cambridge testing meet-ups happening regularly, my plan is to start attending those instead.

But this one, back in February, I couldn’t say no to: it was my first opportunity to listen to Michael Bolton speak about testing, so I thought I’d give it a go.

The meet-up happened in the downstairs area of a London pub, which quickly became quite overcrowded as people gathered to listen to Peter Marshall speak about testing the overhaul of a legacy web app that had only a handful of automated tests as a safety net.

From what I could gather, the audience was a bit of a mix: some test leads/managers; some testers, including people who had never attended a community event before; and recruiters, which was a little annoying – but some of them were sponsoring the meet-up, so I guess it’s only fair they were “allowed” to be there doing their job.

It almost felt like the whole thing was staged, in that the first talk of the evening was partly about “automated tests”, which allowed Michael Bolton to have his say on the topic – one I find fascinating and resonate with a lot.

If you haven’t heard of Michael Bolton before, you should head to his website and read the many posts he has written on the topic of test automation (or checking vs. testing, like this one).

Some of the main points Michael raised were that testing is different from checking, and that because the two are different (by his definitions, which I agree with) automated testing is impossible. You can automate checking, but you can’t automate testing, as machines are still not capable of learning. However, it’s important to state that checking is part of testing too.

Machines are very good at what they can do, and automated checking is certainly something you could have in your “testing toolkit” as a technique, but you must be careful in selecting when to use it – the right tool for the right job applies here.

Michael also said he uses tools for a lot of things these days, such as data generation, logging, probing and test execution, and that tools can also act as oracles for automated checks.
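
As a toy illustration of the checking side – my own sketch, not an example Michael gave: an automated check compares output against an explicit oracle and runs unattended, while choosing the inputs, choosing the oracle and judging whether a failure matters remains testing.

```python
# A minimal automated check; shipping_cost is a hypothetical function under test.

def shipping_cost(weight_kg: float) -> float:
    return 5.0 + 2.5 * weight_kg  # flat fee plus a per-kilo rate

def check_shipping_cost():
    # The oracle: input -> expected output, decided by a human in advance.
    cases = {0.0: 5.0, 1.0: 7.5, 4.0: 15.0}
    for weight, expected in cases.items():
        actual = shipping_cost(weight)
        assert actual == expected, f"{weight} kg: expected {expected}, got {actual}"

check_shipping_cost()  # silent when every check passes
```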

He mentioned a blog post he made on the motive for distinctions which you can access here.

The “Deep Blue” computer chess player having a bug that allegedly led to it beating chess champion Kasparov was also referenced, but in all honesty I can’t remember the exact context here.

Unfortunately I couldn’t stay for the whole duration of his talk but the last piece of information I gathered before I went my way was about using feelings as oracles when testing.

I’m looking forward to attending more community meet ups and will be sharing my takeaways, notes and/or comments here afterwards.