
Some time ago a client asked me some questions about spies and mocks. I wanted to share what we discussed with you.

So here’s the issue my mind has been toiling over…

The project I’m on is using Jasmine for BDD. Technically though, I think most people aren’t actually executing real TDD/BDD. As in, they’re not letting the tests guide their design, but instead are sticking on unit tests at the end, after writing most of the code… this is what their tests suggest, at least.

I see, in their tests, a lot of spies and mocks. This tends to worry me… especially the spies.

I see a lot of it as unnecessary, and even damaging. They appear to be reducing the module that they’re testing to nothing more than a series of spies and mocks. The thing they’re testing seems to bear little resemblance to the real run-time module.

From my perspective, mocking is very good and even essential in the cases of module dependencies that:

  1. Would add too many extraneous variables to the testing environment
  2. Add lag to the tests
  3. Are not semantically tied to the thing we’re testing

Examples I like are database mocks, Ajax mocks, etc.

But spies… I’m very unsure of the value of spies.

The tests I’m reading are creating a series of spies… in fact, every method of the module is spied… even private methods. The tests will call some public method (for example initiatePriceFeed()), and then assert success by ensuring that certain spied methods have been called. This just seems to be testing the implementation… not the actual exposed behavior, which is what I thought BDD/TDD was all about.

So finally, I have a few questions:

  • What is the best way to decide whether a spy is necessary?
  • Is it ever acceptable to test the implementation, instead of exposed behavior? (for example spying on private methods)
  • How do you decide what to mock and what not to?

I am sorry for the length of this email. There seem to be so many things I’d like to say and ask about TDD.

Note! In the Javascript world, it’s common to talk about “spies” rather than “stubs”. A spy and a stub do the same thing. They only differ in intent. In what follows, you can treat “spy” and “stub” as synonyms with, I think, no risk of confusion.

That sounds common. I started doing test-first programming, rather than test-driven development. I probably spent two years focusing on tests as tests before I felt comfortable letting my tests guide my design.

I think the people writing all these spies and mocks do this because it “seems right”. People they respect do it. They need to spend some time practising the technique, so they do it at every opportunity. This corresponds to the Novice/Advanced Beginner stages of the Dreyfus Model: either they just want to practise the technique (Novice), or they feel comfortable using spies/expectations1 and treat every opportunity as an equally appropriate time to use them (Advanced Beginner). Good news: this is a natural part of learning.

Where to go next? Find one example where a module would benefit from depending on data, rather than another module. I go back to the difference between Virtual Clock (spy on the clock so that you can make it return hardcoded times) and Instantaneous Request (pass timestamps directly, rather than the clock, pushing the clock up one level in the call stack). Perhaps this will help people start to question where they could change their approach.

IMPORTANT! Instantaneous Request isn’t necessarily always better than Virtual Clock. Which you choose is less important than the discussions and thoughts that lead you to the choice. Also: starting to use Instantaneous Request over Virtual Clock means that the programmer is evolving, not the code. What matters is not “use fewer spies”, but rather “don’t let spies become a Golden Hammer”. Spies still help, I use them frequently, and I wouldn’t give them up.
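
To make the contrast concrete, here is a minimal Jasmine-style sketch. Invoice, lateFee(), and lateFeeAsOf() are names I’ve invented for illustration; only the shape of the two designs matters.

    // Virtual Clock: the Subject depends on a clock, so the test spies on it.
    describe("late fees, with a Virtual Clock", function() {
      it("charges a fee after the due date", function() {
        var clock = { now: function() { return new Date(); } };
        spyOn(clock, "now").and.returnValue(new Date("2014-03-15"));

        var invoice = new Invoice(new Date("2014-03-01"), clock);
        expect(invoice.lateFee()).toBeGreaterThan(0);
      });
    });

    // Instantaneous Request: pass the timestamp directly; the clock moves up
    // one level in the call stack, towards the client.
    describe("late fees, as an Instantaneous Request", function() {
      it("charges a fee after the due date", function() {
        var invoice = new Invoice(new Date("2014-03-01"));
        expect(invoice.lateFeeAsOf(new Date("2014-03-15"))).toBeGreaterThan(0);
      });
    });

Notice that the second test needs no spy at all: the module now depends on data.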

I wrote about this approach in some detail in “Beyond Mock Objects”.

Regarding the value of spies, I don’t consider spies and expectations much different from one another. A spy is merely an expectation that doesn’t verify which methods were called—instead it waits for you to do that. In some tests, it’s not important to verify what happened, but rather to provide a hardcoded answer for any method our Subject uses. One rule of thumb: spies for queries, but expectations for actions. This works because we tend to want more flexibility in our queries, but more precision in the actions we invoke. Think of the difference between findAllOverdueBalances() and findAllBalances().selectBy("overdue")—it doesn’t matter how I find all the overdue balances. Spies simply make it easier to hardcode 0, 1, a few, or a large number of overdue balances, as each test needs.
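
In Jasmine, the rule of thumb might look like this; OverdueBalanceReminder and its collaborators are invented for the example.

    it("reminds nobody when no balances are overdue", function() {
      // Query: stub it to hardcode an answer; don't verify the call itself.
      var balances = jasmine.createSpyObj("balances", ["findAllOverdueBalances"]);
      balances.findAllOverdueBalances.and.returnValue([]);

      // Action: expect it; invoking it (or not) is the behavior we care about.
      var notifier = jasmine.createSpyObj("notifier", ["remindCustomer"]);

      new OverdueBalanceReminder(balances, notifier).run();

      expect(notifier.remindCustomer).not.toHaveBeenCalled();
    });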

So: spies for queries, but expectations for actions.

Spy, then Spy, then Spy…

I understand your concern about series of spies, but let me check that I understand what you mean. When you say a series of spies, do you mean spying on A.getB() to return a spy B, whose B.getC() returns a spy C so that you can spy on C.theMethodIFindReallyInteresting()?
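
In Jasmine terms, I imagine something like this, using the names from that question:

    var c = jasmine.createSpyObj("C", ["theMethodIFindReallyInteresting"]);
    var b = jasmine.createSpyObj("B", ["getC"]);
    b.getC.and.returnValue(c);
    var a = jasmine.createSpyObj("A", ["getB"]);
    a.getB.and.returnValue(b);

    // ...exercise the Subject with a, then...
    expect(c.theMethodIFindReallyInteresting).toHaveBeenCalled();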

As for ensuring that spied methods have been called, those “spies” become expectations, and it can feel like those tests only check the implementation. That’s OK. If the implementation is so simple that we can check it with a simple test, then that’s good! It’s like double-entry book-keeping in accounting. If the tests are complicated and only check the implementation, then that usually points to an obsession with unnecessary details: a missing abstraction, or perhaps just an unnecessarily complicated API. This last point is an example of not listening to what the tests are trying to tell you.

Most programmers arrive, sooner or later, at the feeling that expectations mean “I’m just checking the implementation”. I had the same feeling once, so I asked myself, “assuming that this actually makes sense, what am I missing?” Well, if the interactions between objects were simpler, then this “checking the implementation” issue wouldn’t cause any real problems, would it? In fact, it would only clarify what we’re trying to do. Maybe, then, when checking the implementation feels weird, we could ask about potential underlying design problems; if those problems disappeared, then we’d feel less weird. This is one of those cases.

Go to a few tests where you feel weird in this particular way, and look for duplication between the examples. You might be surprised!

When Is A Spy “Necessary”?

You ask about “the best way” to decide whether a spy is necessary (or perhaps merely appropriate). I don’t know of One Best Way. I use them, then let duplication drive changes. In particular, I look for unnecessary details duplicated across tests. If I have to duplicate details in a handful of tests just to be able to check some other part of the system, then perhaps I have two things in one place; when I separate them, the corresponding spies become much simpler, and sometimes I can replace a spy with data (from Virtual Clock to Instantaneous Request).

Is It Ever Acceptable…?

You also ask whether it is ever acceptable to test the implementation instead of the behavior. “Is it ever acceptable…?” questions almost always have the answer “yes”, because we can always find a situation in which something becomes acceptable. On the other hand, I don’t typically spy on private methods. If I need to know that level of detail in a test, then the test is trying to tell me that A cares too much about the internals of B. First, I try to remove unnecessary details from A’s tests. Next, I look for duplication in A’s tests. Especially if I spy on the same functions in the same sequence, that duplication points to a missing abstraction C.

So When to Mock?

I have two answers to this question. First, when do I use spies/expectations compared to simply using “the real thing”? I like to program to interfaces (or protocols, depending on the language) and I like to clarify the contracts of those interfaces, something that expectations help me do effectively. To learn more about this, read the articles I list at the end related to contract tests. Especially read “When Is It Safe to Introduce Test Doubles?”.
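
To give a flavour of what a contract test looks like in Jasmine (a sketch; the repository names are invented): the contract of an interface lives in a reusable function, so that every implementation can be held to the same promises that clients, and their test doubles, rely on.

    // The contract of any BalanceRepository, written once...
    function contractForBalanceRepository(createRepository) {
      describe("as a BalanceRepository", function() {
        it("finds no overdue balances when it is empty", function() {
          expect(createRepository().findAllOverdueBalances()).toEqual([]);
        });
      });
    }

    // ...then applied to each implementation in its own test suite.
    describe("InMemoryBalanceRepository", function() {
      contractForBalanceRepository(function() {
        return new InMemoryBalanceRepository();
      });
    });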

Finally, when I’m not sure whether to use a spy or an expectation, I go back to the rule of thumb: spy on queries, but expect (mock) actions.

References

Wikipedia, “Dreyfus model of skill acquisition”. Not everyone likes this model of how people develop skills. I find it useful and refer to it frequently in my work.

c2.com, “Virtual Clock”. An overview of the Virtual Clock testing pattern, with further links.

J. B. Rainsberger, “Beyond Mock Objects”. I use test doubles (mock objects) extensively in my designs and they help me clarify the contracts between components. Even so, using test doubles mindlessly can interfere with seeing further simplifications in our design.

I apologise again for not having collected my thoughts about collaboration and contract tests into a single work. I need to find the time and energy (simultaneously) to do that. In the meantime, I have a few articles on the topic:

  1. In order to avoid confusion with the generic concepts of “mock objects” (better called “test doubles”), I use the term expectations to refer to what many people consider a mock: function foo() should be called with arguments 1, 2, 3.


I think that programmers worry far too much about design.

No, I don’t mean that they should care less about design. I think that programmers worry so much about design that they forget to just program. As they try to learn more about how to design software well, they become more reluctant to write code, fearing that “it won’t be right”. I think that we contribute to this reluctance by writing articles with a tone that implies don’t write code unless you write it my way. I don’t think we mean to do this, but we do it nonetheless.

What if we thought about design a slightly different way? Let’s not think about design principles as constraints for how we write code, but rather as suggestions for how code wants to flow. Focus your energy first on writing correct code, then use the principles of design that you’ve learned to guide the flow of code from where you’ve written it to where it seems to belong. If you prefer a more direct metaphor, then imagine you’re writing prose. Rather than obsessing over the rules of grammar on your first draft, use them to guide how you edit. Let yourself more freely write your first draft without fear of “getting it wrong”, then use your editing passes to fix grammar errors, improve clarity and elevate style.

Now you’ve probably heard this before. “Make it work, then make it right, then make it fast.” This constitutes the same advice. So why repeat it? You probably also know that sometimes we need to hear the same advice in a variety of forms before we feel comfortable using it. I’ve been talking in my training classes about “code flow” for a few years, and it seems to help some people feel more comfortable adopting an evolutionary design approach. In particular it helps some programmers avoid feeling overwhelmed by design principles to the point of not wanting to write any code at all, for fear of “doing it wrong”. After all, the more we say that “code is a liability”, the more people will tend to think of writing code as an evil act. That sounds extreme, but so does some of our rhetoric!

When I teach software design—usually through test-driven development—one or two people in the class commonly ask me questions like “Can I use set methods?” or “Can I write a second constructor?” which convey to me a feeling of reluctance to “break the rules”. I really don’t want my course participants to feel like I want to stop them from writing code; on the contrary, I want them to feel more comfortable writing code precisely because they can apply their newly-learned design principles to improve their designs easily and quickly over time. I expect them to feel less fear as their design skills improve, because no matter what crap they write in the morning, they can mold it into something beautiful in the afternoon. I have to remind myself to approach code this way, rather than worrying too much about “getting it right the first time”.

An Example

Consider this article on the topic of encapsulation. I like it. I think it explains a few key points about encapsulation quite well. Unfortunately, it includes a line that, out of its context, contributes to this fear-based mindset that I’ve seen so often:

If you ever use a setter or define an attribute of a component from the outside, you’re breaking encapsulation.

I remember myself as an inexperienced programmer trying to improve at my craft. That version of me would have read this sentence and thought I must not use setters any more. This would invariably lead me to a situation where I would refuse to write a setter method, even when I have no other option. (Sometimes tools get in the way.) This way lies design paralysis. When I’ve written over the years about design principles, I’ve certainly not wanted to make it harder for you to write code.

What Should I Do, Then?

Later in the same article, the author writes this:

It’s common in Rails projects to use patterns such as User.where("something = something_else") from controllers or service classes. How do you know the internal of the database to be able to pass that SQL parameters? What happens if you ever change the database? Or User? Instead, User.some_method is the way to go.

I agree with the principle and the example. I would, however, like to highlight a different way to interpret this passage. Rather than thinking, “I should never write User.where("something = something_else")”, think of it this way instead:

I’ll write User.where("something = something_else") for now, just because I know it should work, but I probably shouldn’t leave it like that once it’s working.

Don’t let design guidelines (like “improve encapsulation”) stop you from writing the code you need to write in the moment (as a first draft), but rather use them to guide the next steps (your editing). Don’t let design guidelines stop you from getting things working, but rather use them to stop you from leaving freshly-written legacy code behind.

So What’s This About Code Flow?!

Many programmers offer suggestions (for varying meanings of “suggest”) for where to put code. Some programmers write frameworks to try to constrain where you put code, lest you make (what they consider) silly mistakes. This eventually leads even experienced programmers into situations where they feel like they’re stuck in some Brazilesque bureaucracy preventing them from writing the one line of code they need to make something work. (You need a controller, a model, a DTO, a database client object, …) Instead of thinking of “the right place to put things”, I prefer to offer suggestions about how to move code closer to “where it belongs”.

Going back to the previous example from that encapsulation article, I would certainly have no problem writing User.where("role = 'admin'") directly in a controller just to get things working, but I just know that if I leave the design at that, then I will have set a ticking time bomb for myself, primed to explode at some unforeseen and, almost certainly, inopportune time. As a result, once I get my tests passing with this poorly-encapsulated code, then I can take a moment to look at that code, ask what does this mean?, realise that it means “the user is an admin”, then extract the function User.admin?. In the process, the details in this code will have flowed from the controller into the model, where they seem to belong.
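
The example is Rails, but the flow looks the same anywhere. Here is the same move sketched in JavaScript, with User standing in for an ORM-backed model; everything here is hypothetical.

    // A stand-in for an ORM-backed model, hardcoded so the sketch runs.
    var User = {
      where: function(condition) {
        // a real ORM would interpret the condition; we fake it here
        var rows = [{ name: "Alice", role: "admin" },
                    { name: "Bob", role: "member" }];
        return rows.filter(function(user) { return user.role === "admin"; });
      }
    };

    // First draft, straight in the controller: it works, so get to green...
    var admins = User.where("role = 'admin'");

    // ...then ask "what does this mean?" It means "find the admins", so
    // extract that, letting the detail flow from controller into model.
    User.admins = function() { return this.where("role = 'admin'"); };

    var sameAdmins = User.admins();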

I have found this pattern repeating itself in all the major application frameworks I’ve ever used: while learning the framework I put code directly into the controller/transaction script/extension point, and after I’ve written several of these and found them difficult to test or change, the details flow into more suitable components, often representing a model or view (or a little bit of each). By understanding my design principles in terms of where code ought to flow, I get the benefits of better design without the paralysing fear that I might “get it wrong”.

So if you need to break encapsulation, just for now, just to get something working, then do it. Just don’t leave it like that.

References

Alexandre de Oliveira, “Complexity in Software 2: Honor Thy Encapsulation”. In this article, Alexandre talks about “real” and “perceived” complexity in software design, which seem to me to relate to Fred Brooks’ concepts of “essential” and “accidental” complexity. He also includes a definition for encapsulation that I hadn’t read before, and that I quite like. Enjoy the article.


I don’t intend to argue Alistair’s contention one way or the other, but I invite you to set aside some time to read David Parnas’ paper “On the Criteria To Be Used in Decomposing Systems into Modules”, which I have embedded in this article. Do not let yourself be put off by the quaint-sounding title. If you prefer, think of it as titled “The Essence of Modularity”.

I care about this paper because I strive for modularity in designing software systems and I find that programmers routinely lose sight of both what modularity offers them and what it means. I value modularity as a way to drive down the cost of changing software. I value that because most of what we do as programmers consists of changing software, and so it strikes me as a sensible place to economise.

If you can’t take the time to read the whole paper now, then let me direct you to a particularly salient part of the conclusion.

We have tried to demonstrate by these examples that it is almost always incorrect to begin the decomposition of a system into modules on the basis of a flowchart. We propose instead that one begins with a list of difficult design decisions or design decisions which are likely to change. Each module is then designed to hide such a decision from the others.

Enjoy the paper.

References

J. B. Rainsberger, “Modularity. Details. Pick One.”. We introduce modularity by refusing to let details burden us.

Martin Fowler, Refactoring: Improving the Design of Existing Code. A classic text that takes an evolutionary approach to increasing modularity in a software system.


I wanted to change some of the styling at jbrains.ca, but I have a legacy WordPress template, so I needed a way to start making incremental changes with something remotely approximating tests. I knew that I didn’t want to have to crawl every page to check that every pixel remained in the same place, in part because that would kill me, and in part because I don’t need every pixel to remain in the same place. I needed another way.

How to Refactor CSS/SCSS

I chose to replace the raw CSS with SCSS using the WP-SCSS WordPress plugin. Since I had all this legacy CSS lying around in imported files, and I had fiddled with some of it before I knew how the original authors had organised it, I needed to consolidate the CSS rules as soon as possible so that I could change them without accidentally breaking them.

First, I created one big CSS file (the “entry point”) that imports all the other CSS files. Then, in order to use WP-SCSS effectively, I needed to move the imported files into a subdirectory css/, so that I could generate CSS from only the SCSS located in scss/. This meant changing some statements that loaded images using relative paths. I fixed those with some simple manual checks that the images load correctly before and after the change. (Naturally, I discovered the problem by accident, then fixed it.) At this point I had one big CSS entry point that imported a bunch of other CSS files from css/. I committed this to version control and treated it as the Golden Master1.

Next, I copied all the CSS “partials” into genuine SCSS partials and changed the entry point to import a single generated CSS file. I created an SCSS entry point that imports all the SCSS partials. This should generate the same CSS entry point, but get rid of all the little generated CSS “partials”. It did. I committed this to version control.

Now I can freely change my SCSS, generate the CSS, and check the git index for changes. As long as only the SCSS changes and the generated CSS doesn’t change, I definitely haven’t broken the CSS. If the generated CSS changes, then I check the affected web pages by hand and either undo the change or commit the generated CSS as the new golden master.
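
If you’d like to automate that check, a few lines suffice. Here’s a sketch as a Node script; it assumes the generated CSS lives in css/ and has been committed as the golden master.

    // golden-master-check.js: after regenerating the CSS, ask git whether
    // the generated files changed.
    var execSync = require("child_process").execSync;

    try {
      // --exit-code makes git diff fail (exit 1) when it finds changes.
      execSync("git diff --exit-code -- css/", { stdio: "inherit" });
      console.log("Generated CSS matches the golden master. Carry on.");
    } catch (error) {
      console.log("Generated CSS changed: check the affected pages by hand,");
      console.log("then undo the change or commit the new golden master.");
    }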

I hope this helps you deal with your own legacy CSS. You know you have some.

  1. This refers to the Golden Master technique where we check the result once by hand, then compare future versions automatically to the hand-checked “golden master” to detect changes. It’s like testing.


I have written elsewhere that people, not rules, do things. I have written this in exasperation over some people claiming that TDD has ruined their lives in all manner of ways. Enough!

People, not rules, design software systems. People decide which rules to follow and when. The (human) system certainly influences them, but ultimately, the people decide. In particular, people, not TDD, decide how to design software systems. James Shore has recently written “How Does TDD Affect Design?” to offer his opinion, in which he leads with this.

I’ve heard people say TDD automatically creates good designs. More recently, I’ve heard David Hansson say it creates design damage. Who’s right?

Neither. TDD doesn’t create design. You do.

I agree. Keith Braithwaite responded in a comment with this.

TDD does not by itself create good or bad designs, but I have evidence (see “Complexity and Test-First 0”) suggesting that it does create different designs.

Keith’s comment triggered me to think about how practising TDD has affected the way I design software systems, of which this article represents a summary. I might add more to this list over time. If you’ve noticed an interesting pattern in your designs that you attribute to your practice of TDD, then please share that in the comments.

More value objects, meaning objects with value-equality over identity-equality. I do this more because I want to use assertEquals() a lot in my tests. This also leads to smaller functions that return a value object: specifically, more functions that return a value object signifying the result of the function, where I might not have cared about the result before. Sometimes this leads to unnecessary code, and when it does, I usually find that I improve the design by introducing a missing abstraction, such as an event.
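
For example, in Jasmine, toEqual() plays the role of assertEquals(); Money here is an invented example of such a value object.

    // A value object: two Moneys are equal when their values are equal,
    // regardless of identity.
    function Money(amount, currency) {
      this.amount = amount;
      this.currency = currency;
    }

    Money.prototype.plus = function(other) {
      return new Money(this.amount + other.amount, this.currency);
    };

    it("adds amounts in the same currency", function() {
      // toEqual() compares by value, so no spies and no getters needed here.
      expect(new Money(3, "CAD").plus(new Money(4, "CAD")))
        .toEqual(new Money(7, "CAD"));
    });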

More fire-and-forget events. I do this more because I want to keep irrelevant details out of my tests. Suppose that function X should cause side-effect Y. If I check for side-effect Y, then I have to know the details of how to produce side-effect Y, which usually leads to excessive, duplicate setup code in both X’s tests and Y’s tests. Not only that, but when X’s tests fail, I have to investigate to learn whether I have a problem in X or Y or both. Whether I approach this mechanically (remove duplication in the tests) or intuitively (remove irrelevant details from the tests), I end up introducing event Z and recasting my expectations of X to “X should fire event Z”. This kind of thing gives many programmers the impression of “testing the implementation”, whereas I interpret this as “defining the essential interaction between X and the rest of the system”. The decision to make function X fire event Z respects the Open/Closed Principle: inevitably I want X to cause new side-effects A, B, and C. By designing function X as a source for event Z, I can add side-effects A, B, and C as listeners for event Z without changing anything about function X. This leads me to see the recent (as of 2014) trend towards Event Sourcing as a TDD-friendly trend.
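
A sketch of the shape I mean, using Node’s EventEmitter; completePurchase and its listeners are invented for illustration.

    var EventEmitter = require("events").EventEmitter;

    // X fires event Z; it neither knows nor cares who listens.
    function completePurchase(purchase, events) {
      // ...the essential work of X goes here...
      events.emit("purchaseCompleted", purchase);
    }

    var events = new EventEmitter();

    // Side-effects A, B, C arrive as listeners, without changing X at all.
    events.on("purchaseCompleted", function(purchase) { /* send a receipt */ });
    events.on("purchaseCompleted", function(purchase) { /* update loyalty points */ });

    completePurchase({ total: 42 }, events);

In a test, “X should fire event Z” becomes a listener that simply records that the event fired.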

More interfaces in languages that have interface types. In the old days, we had to introduce interfaces (in Java/C#) in order to use the cool, new dynamic/proxy-based mocking libraries, like EasyMock and JMock. Since the advent of bytecode generators like cglib, we no longer need to do this, but my habit persists of introducing interfaces liberally. Many programmers complain about having only one implementation per interface, although I still haven’t understood what makes that a problem. If the language forces me to declare an interface type in order to derive the full benefits of abstraction, then I do it. At least it encourages me to organise and document essential interactions between modules in a way that looser languages like Ruby/Python/PHP don’t. (Yes, we can implement interfaces in the duck-typing languages, but Java and C# force us to make them a separate type if we want to use them.) Moreover, the test doubles themselves act as additional implementations of the interfaces, which most detractors fail to notice. They might argue that I overuse interfaces, but I argue that they underuse them. Interfaces provide an essential service: they constrain and clarify the client’s interaction with the rest of the system. Most software flaws that I encounter amount to muddled interactions—usually misunderstood contracts—between modules. I like the way that the interfaces remind me to define and refine the contracts between modules.

Immutability. As functional programming languages have become more popular, I’ve noticed more talk about mutability of state, with an obvious leaning towards immutability. In particular, not only do I find myself wanting functions more often to return value objects, but specifically immutable value objects. Moreover, thinking about tests encourages me to consider the pathological consequences of mutability. This happened recently when I wrote “The Curious Case of Tautological TDD”. Someone responded to the code I’d written pointing out a problem in the case of a mutable Cars class. I had so long ago decided to treat all value objects as immutable that I’d even forgotten that the language doesn’t naturally enforce that immutability. I’ve valued immutability for so long that, for me, it goes without saying. I reached this point after writing too many tests that only failed when devious programmers took advantage of unintended mutability, such as when a function returns a Java Collection object. I went through a phase of ensuring that I always returned an unmodifiable view of any Collection, but after a while, I simply decided to treat every return value as immutable, for the sake of my sanity. Functional languages push the programmer towards more enforced immutability, and even the eradication of state altogether. I feel like my experience practising TDD in languages like Java and Ruby has prepared me for this shift, so it already feels quite natural to me; conversely, it annoys me when I have to work in a language that doesn’t enforce immutability for me.
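
In JavaScript, Object.freeze() at least lets me state that decision in code. Continuing the hypothetical Money sketch from above:

    "use strict";

    function Money(amount, currency) {
      this.amount = amount;
      this.currency = currency;
      Object.freeze(this); // mutation now throws in strict mode
    }

    Money.prototype.plus = function(other) {
      // no mutation: return a new value instead
      return new Money(this.amount + other.amount, this.currency);
    };

    var five = new Money(5, "CAD");
    // five.amount = 1000000; // TypeError: cannot assign to a frozen object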

How has TDD affected the way you design? Or, perhaps more importantly, what about the way TDD might affect your designs makes you uneasy about trying it? I might have some useful advice for you.


I’m publishing this as a “rough cut”, so I apologise to everyone annoyed by having to download a PDF to read this article. I have my reasons. Some of the links in the document don’t work; the links to external web sites, however, should. A handful of people have already suggested improvements and reported problems, which I appreciate, particularly the cordial and civil manner in which they’ve done it. (Hint.)



Recently Bob Marshall opined that refactoring code is waste. This reminds me of passionate discussions from a decade ago about testing: should we classify testing as a value-added activity or as an unavoidable waste? I’d like to change the question a little, but first, allow me to play the ball where it lies.

If you haven’t read Bob’s article yet, then do so now. You’ll find it quite short; I read it in a few minutes. I composed this as a comment to Bob’s article, but it expanded to the point where I chose to promote it to a short article. You might say that I refactored my writing. With that segue manufactured…

Is editing waste for a writer? Why don’t writers simply write the right words/articles/books/sentences the first time? So I think it goes for programmers. I think of refactoring as editing for programmers. Since I plan to refactor, I don’t have to program like Mozart and “get it right” in my head before writing it down. This helps me, because often I don’t see trouble with code until I’ve written it down, even though sometimes drawing its structure helps me enough to spot trouble.

Sometimes problems don’t emerge until long after I’ve written it down and the situation changes, putting pressure on an old choice or negating an old assumption. Absolutely/permanently right-first-time seems to require clairvoyance. Writing any code entails risk.

Even so, I agree that we programmers don’t need to deny our own experience just to fit some arbitrary goal of taking tiny steps and refactoring towards abstractions. (This has got me in trouble with some people who declare what I do “not TDD”. As they wish.) Sometimes I can see the abstractions, so I go there sooner. Sometimes that doesn’t work out, so I refactor towards different abstractions. Often it works out and I’ve skipped a handful of tedious intermediary steps. One could measure my “expertise” in design by measuring the additional profit I can squeeze out of these trade-offs compared to others. (No, I don’t know how to measure that directly.) I think we broadly call that “judgment”.

A Question of Intent

I find refactoring wasteful when I do it out of habit, rather than with a purpose. Nevertheless, I don’t know how I could have developed the judgment to know the difference without making a habit of refactoring. (Of course, I like to think that I do everything always with a purpose.) I encourage novices (in the Dreyfus Model sense) to force code into existence primarily through refactoring, with the purpose of developing that judgment and calling into question their assumptions about design. That reasoning sounds circular, but I have written and said elsewhere how refactoring helps programmers smooth out the cost of maintaining a system over time. I can only assert that I produce more maintainable software this way, compared to what I used to do, and that refactoring plays a role. I really wish I knew how much of that improvement to attribute to refactoring. Refactoring still saves my ass from time to time, so it must pull some of its own weight.

I would classify refactoring as waste in the same way that I’d classify verification-style testing as waste: since we don’t work perfectly, we need feedback on the fitness of our work. Not only that, but I refactor so that I don’t have to future-proof my designs, since building structures now that we don’t intend to exploit until later is itself a waste. Which waste costs more? I find that open question quite interesting.

References

Bob Marshall, “Code Refactoring”. In his article, Bob surmises that programmers can’t quite “get it right” in their heads, and highlights refactoring as potentially a self-fulfilling waste: if we assume that we have to live with it, then we will choose to live with it. I leave the parallel with #NoEstimates as an exercise for the reader.

Gemma Cameron, “Is Refactoring Waste?”. I noticed Gemma’s article on Twitter and it led me to read Bob’s. She mentions that she plans to experiment with a TDD microtechnique that I use often: noticing while ‘on red’ that a little refactoring would make it easier to pass the test, and so ignoring (or deleting) the test to get back to ‘green’ in order to refactor safely. I don’t always do this, but I consider it part of the discipline of TDD and teach it in my training courses.

“The Dreyfus Model of Skill Acquisition”. Wikipedia’s introduction to the topic. All models are wrong; some models are useful. I find this one helpful in explaining to people the various microtechniques that I teach, when I follow them and when I don’t.

J. B. Rainsberger, “The Eternal Struggle Between Business and Programmers”. The article in which I make the case for refactoring as a key element in reducing the cost of adding features to a system over time.


If you like to teach test-first/test-driven techniques, then you have probably stumbled over configuration problems in your programming environment. This happens to every presenter at least once. We have a variety of ways to handle the problem, and today I’d like to share another one with you.

Google Spreadsheet can run Javascript, which means that we now have a ready-to-go approximation of Fit. It took me only a few minutes to figure out how to write a Javascript function that operates on the value in a spreadsheet cell. After that, I recorded ten minutes of video demonstrating the environment.

Using Google Spreadsheet as a simple TDD/BDD environment from J. B. Rainsberger on Vimeo.
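
For the curious: a custom function in Google Apps Script is plain JavaScript. This sketch (my own trivial example, not the code from the video) shows the whole trick.

    // In the spreadsheet's script editor. Once saved, any cell can call it.
    function FIZZBUZZ(n) {
      if (n % 15 === 0) return "FizzBuzz";
      if (n % 3 === 0) return "Fizz";
      if (n % 5 === 0) return "Buzz";
      return n;
    }

    // In the sheet itself, Fit-style:
    //   A2: the input, e.g. 15
    //   B2: the expected result, e.g. "FizzBuzz"
    //   C2: =FIZZBUZZ(A2)
    //   D2: =IF(B2=C2, "PASS", "FAIL")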

Yes, I wrote the code test-first, but not test-driven. Please hold your cards and letters. I wanted to demonstrate a testing environment and not TDD. Even so, with ATDD or BDD, we programmers often receive a batch of customer tests and make them pass one by one, and sometimes we don’t need additional programmer tests to have confidence that we’ve built things well. Looking at the final design for this solution, I don’t think that a strict test-driven approach would have improved anything. If you disagree, then please share your approach with us!


So maybe not an epic, age-old battle, but a battle nonetheless. The battle over the correct ordering of the Four Elements of Simple Design. A battle that I’ve always known I’d won, but until recently, could never justify to others. I can finally declare victory and bestow upon you the One True Sequence.

Calm down; I’m joking.

Seriously: it has always bothered me (a little) that I say it this way and Corey Haines says it that way, and somehow we both have it “right”. I never quite understood how that could happen… until now.


I have improved the code samples to make them closer to valid, working Python.—jbrains

At some point, you know I had to write this article. Let me state something clearly from the beginning.

In spite of the title, I use mocks freely and happily.

I do not intend with this article to join those who either shun mocks or appear to shun mocks only to praise them. I plan to share a simple example of using mocks as a stepping stone towards a potentially more suitable design, but you must not interpret this as a message against mock objects.

There. I feel better now. I trust you to carry my message the way I intended it. Do not betray that trust.
