

Tag: Unit Test

Migration issues

Upgrading to Xcode 5 is giving me a headache; our setup for unit testing, migrated from Xcode 4, doesn’t work.

“RunUnitTests is obsolete”

In Xcode 4 (and earlier), the TEST_AFTER_BUILD flag allowed running tests after building a test target. In Xcode 5 we receive the following message if the flag is set to YES:

error: RunUnitTests is obsolete. To run unit tests for your target, use the Test scheme action in the Xcode IDE and the test action in xcodebuild.

So, we have functionality that is now phased out, and we are left with concise hints regarding alternatives:

  • the Test scheme action in the Xcode IDE
  • the test action in xcodebuild
I am assuming that it is still possible to run tests after building, and maybe the “test action in xcodebuild” has made the TEST_AFTER_BUILD flag somewhat redundant. After poking around I didn’t find anything well baked, so I’m looking at xcodebuild.
Using xcodebuild to test
Looking at the man page for xcodebuild, I cooked up the following to run my tests:
xcodebuild test -workspace MY_WORKSPACE.xcworkspace -scheme MY_SCHEME -destination 'platform=iOS Simulator,name=iPhone'
The embarrassing part here is the destination setting. In Xcode 4 you could run so-called “logic tests” without booting the simulator. Setting the platform key to “OS X” does not help; we get the following message:

The run destination My Mac 64-bit is not valid for Testing the scheme ‘SCHEME_NAME’.
The scheme ‘SCHEME_NAME’ contains no buildables that can be built for the SDKs supported by the run destination My Mac 64-bit. Make sure your targets all specify SDKs that are supported by this version of Xcode.

Sadly, this makes sense. We have libraries (and tests) built for iOS (checking ‘Architectures’ in build settings), so we’d have to do some (re)configuration to get this to work on OS X. The question is: how did it work before?

I can live with this for now.

Calling xcodebuild after running a build?

Pre/post-actions won’t do

Under Edit Scheme > Build (unfold it), we find Pre-actions, Build, and Post-actions. I’m hoping that I can run xcodebuild using a post-action. While not ideal (for lack of integration with… Xcode), I don’t have a better idea at this time.

It took me a while to convince myself that pre/post-actions are actually being run. A comment on Stack Overflow explains why:

“In Xcode 4 [and 5] you could run this script as a pre- or post-action script on the “Archive” action. However, pre- and post-action scripts are not considered part of the build itself; their output is not part of the build log, and a failing script will not cause the build to fail.”

This is no good. I don’t just want to run tests and get no output, let alone have broken tests go unnoticed – quite the opposite.

Adding a build phase

So, instead of a post-action, I added a build phase: Editor > Add Build Phase > Add Run Script Build Phase. Xcode doesn’t hide the output of a custom build phase. Additionally, running tests as part of a build phase is actually helpful:

  • If a test fails, the custom build phase – and the build – also fail.
  • Somehow, Xcode manages to open the failing test and highlight the failing assertion.

My build phase script is simple, looking like this:

cd DIR
xcodebuild -workspace MY_WORKSPACE.xcworkspace -scheme MY_SCHEME -destination 'platform=iOS Simulator,name=iPhone' test

where DIR is the directory containing the target workspace.
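For what it’s worth, the same script can be wrapped a little more defensively. This is only a sketch: the workspace/scheme names are placeholders, and the `XCODEBUILD` override and `run_tests` helper are my own additions (they let you dry-run the wiring on a machine without Xcode).

```shell
#!/bin/sh
# Sketch of the run-script build phase; workspace and scheme names are
# placeholders. XCODEBUILD can be overridden with a stand-in command so
# the command wiring can be exercised without Xcode installed.
XCODEBUILD="${XCODEBUILD:-xcodebuild}"

run_tests() {
    # Run the test action for the given workspace and scheme,
    # targeting the iPhone simulator as in the script above.
    "$XCODEBUILD" -workspace "$1" -scheme "$2" \
        -destination 'platform=iOS Simulator,name=iPhone' test
}

# Dry run: substitute echo for xcodebuild and print the resulting command line.
XCODEBUILD=echo
CMDLINE=$(run_tests MY_WORKSPACE.xcworkspace MY_SCHEME)
echo "$CMDLINE"
```

Because `set -e` isn’t used here, remember that the phase fails whenever the last command (the xcodebuild call) exits non-zero, which is exactly what we want.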

Yea, that’s it. But there is a caveat: you probably can’t include this as part of the build action of the project you are testing, because the test action tends to invoke the build action, causing the test and build actions to invoke each other recursively.
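One hedged way around the recursion, if you do want the phase on the same project (a sketch only; `TESTS_ALREADY_RUNNING` is a made-up sentinel name): export a flag before invoking xcodebuild. The nested build inherits the environment, so the inner run of the phase sees the flag and bails out.

```shell
#!/bin/sh
# Sketch: break the build -> test -> build cycle with an exported sentinel.
# TESTS_ALREADY_RUNNING is a made-up name; any otherwise-unused variable works.
run_phase() {
    if [ -n "$TESTS_ALREADY_RUNNING" ]; then
        echo "nested invocation, skipping tests"
        return 0
    fi
    TESTS_ALREADY_RUNNING=1
    export TESTS_ALREADY_RUNNING
    echo "running tests"
    # Here you would call: xcodebuild ... test
    # That re-enters the build and re-runs this phase, now with the flag
    # set. We simulate the nested run with a direct recursive call:
    run_phase
}

LOG=$(run_phase)
echo "$LOG"
```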

I’m guessing it could be done using an additional target in the same project. Because the project producing our app does not really contain code, what I do is add the custom build phase to that, calling xcodebuild for every library the app needs.

This article is a revision of a previous article I wrote about unit testing with Xcode.

I describe the steps carried out to set up a test target long after a project has been created, so initially the test target will lack most of what’s needed to operate correctly (e.g. frameworks, libraries and source files).

Setting up may take up to a few hours.

Creating a test target isn’t a big deal (just go to the project panel, add a new target and follow the steps).

I expect tests to run every time I try to build/run the main target. If the tests won’t pass I’d rather not fire up the simulator.

By default, the test target is not set up to work this way. Maybe that makes sense (after all, how would it know which target to depend on?).

  1. Go to the project panel > Build Phases; add the test target under Target Dependencies.
  2. In the project panel, open ‘Build Settings’ for the test target, then search for and tick ‘Test After Build’.
Note: You can also edit the scheme for the main target; under ‘Testing’, add the test target. However, doing it this way won’t run tests every time you run the application; you’d have to select the Test action instead of Run (top left of the Xcode window, press and hold the ‘Run’ button).
Up to ‘housekeeping’, the steps described below are likely necessary. Try to build before/after every step and check build messages to help you move on.

1. Adding source files

Maybe the fastest way to add your .m files: go to Build Phases, scroll to ‘Compile Sources’, then press add and type *.m as a search filter. If you have many groups you may need to do this a few times to include all the required files.

2. Disable/Enable Automatic Reference Counting

In your test target, under Build Settings, search for ‘reference counting’ and tick/untick “Objective-C Automatic Reference Counting” to match your main target.

3. Replace/Edit precompiled headers

In build settings, search for ‘pch’. If possible I just copy the *.pch from my main target settings to the test target settings.

4. Add the required frameworks & libraries

Most of this should be copied from your main target. Opening the assistant editor so you can compare/check both targets at once is useful (see build phases > link binary with libraries).

If you miss a library, it will result in countless unresolved symbols, so it’s easy to figure out.

5. List additional headers

If you’re pointing at additional headers in Build Settings, ideally you can share definitions by declaring them in the project’s (not the target’s) build settings – target-specific entries can then pick them up via $(inherited).


Keep your framework/library references tidy: Checking the list of frameworks/libraries included in your project (there may be some in several places) may be a good idea as (harmless) duplicates may arise.

With frameworks/libraries, getting ‘red files’ is not rare. In many cases this doesn’t mean anything is actually broken (sigh…)

Remove unused files: remove files created by the wizard, if unused (e.g. precompiled headers)

Rename/Regroup: Renaming/Regrouping may cause references to break and need further editing. It’s a bit fidgety so you’ve been warned.

I like using generic names for my source folders (e.g. not MyApp Tests/, just test/) and certain files (Info.plist instead of MyAppTests-Info.plist). Renaming the Info.plist will require editing build settings etc., so maybe it’s easier to leave things as they are (Xcode can help you rename project items).


Beware that some classes in related libraries may rebuild after setting up the test suite; if old files were lingering in the system this may cause errors and confusion. Clean the build if things start looking weird.

This article mainly addresses the rest of us. If you’re not a TDD (test-driven development) convert, yet your heart responds to the sirens of unit testing while you find yourself admiring the solution without cracking the nut, you may find it useful. Intermediate agile adopters may skim through as well. This isn’t a tutorial on how to write unit tests.


  • If you’re not in the habit of writing tests, and you can’t get into it, the best time to write tests is when your code undergoes significant or overwhelming changes.
  • Only write mocks or stubs if you find it easier than using actual classes. I know this may sound heretical or misleading. I’ll explain.

It won’t get us anywhere

Unless you start with generous faith and enthusiasm, getting started with unit testing (or worse, TDD) while tackling a new project won’t get you anywhere. My first experience with TDD and unit testing was on a teeny-tiny yet critical project. In the end I put 3 or 4 classes under test (out of ~50). It was interesting but inefficient, and I also ended up doing 4 days of overtime.

A unit test isn’t a footnote attached to a bug fix

The big plus with unit testing is that it forces us to design code in (usually) better ways. It doesn’t tell us how to do it. We’re left working it out, and this may turn out to be overly time-consuming. Or it may piss you off and you’ll bury all the good stuff under a pile of curses.

In ‘legacy code literature’, unit tests are high on the list, along with suggestions to ‘write a test for every bug fix’. So I fix the bug and write a test and the critter won’t recur – job done. Right?

Uh-oh. The normal story is, as often as not, this class isn’t even testable. It needs to be redesigned. That will affect dependencies on other classes (it is precisely because of these dependencies that the class can’t be tested), then you’ll be wanting to refactor these other classes as well, and you’ll end up giving up, unless you actually call home and sign up for an overnight crusade. Sad but true: better designed code is less likely to break down, and easier to test.

The barbarian and the healer

The good news about badly written code is that it shouts for itself. You’ll be fed up, exhausted. You’ll be a dog that has worn down its wisdom teeth gnawing a pile of skulls. And then…

You’ll be using the mace.

Maiming your code, changing, removing or invalidating methods, classes, sometimes entire packages. And then…

Well then, any good barbarian needs a healer. So after we’ve made a mess for the good cause, we’ll be looking at the result bitterly, itching to revert to whatever can-of-worms just needed a new expiration date hastily patched over it, and we’re ready to add tests. Viz…

  • We’ve already agreed to redesign our code for the better (better code) and the worse (overtime).
  • We’ve entirely lost confidence in our codebase. Rewriting everything is a tempting(-ly dangerous) alternative, and the prospect of running that code at all is simply frightening.

Am I drunk?

Some will claim that doubles (stubs, mocks, fakes… you name it) are inherently necessary when writing unit tests. I agree. Here’s why.

One foundational purpose of unit-testing is to ensure classes are well implemented.

Now let’s consider a class Acme depending on a class Foo. Suppose you write AcmeTest, carrying that dependency on Foo. Now say somebody changes Foo and AcmeTest breaks down. Well, we’ve just proven that AcmeTest is testing Acme and Foo.

Among stories hailing the successes of unit testing, I’m sure you’ve read something like ‘I made a minor change and a bunch of red lights fired up, signaling my change caused unexpected errors in (apparently) totally unrelated classes’.

I’d be surprised to hear that the guys witnessing ‘red lights in CC’ religiously used mocks or stubs. More likely, they had dependency chains ‘spoiling their tests’. These ‘half-baked tests’, however, are still popular with some agile practitioners.

The bottom line is that well-designed unit tests using mocks and stubs test less, and less safely, than half-cooked unit tests depending on actual, production-caliber collaborators:

  • Tests using mocks and stubs are indeed asserting whether classes are correctly implemented. These only test interactions between classes to the extent that doubles are ‘locally faithful’ to the original. Foo can evolve independently from FooStub inasmuch as the original contract specified by Foo doesn’t change (mocks are even more flexible than stubs in this respect). Yea. Doubles are basically formal specs lying around. Like documentation, they can get out of date without anything breaking down.
  • Tests using actual collaborators are non-local. We’re not testing just the ‘object under test’. We’re testing its actual collaborators (however indirectly) along with the net sum of interactions between the object under test and the underlying subsystem. This makes it potentially harder to tell what is breaking down. Yet…

Lazy’s Good. Less isn’t more.

Consider an ideal situation (say, the IT division of a large company):

  • We have enough time and discipline to maintain our tests. When a class implementation changes, we review stubs and mocks related to this class.
  • We have a ‘full featured battery of tests’. Unit tests; component tests; acceptance tests.

In this situation, the advantages of mocks and stubs hold true. Since we have component tests and acceptance tests, our unit tests need only validate class implementations, not their holistic sum. Since we have enough time and discipline, our stubs and mocks never get out of date.

Now consider a workable (if ever-stressful) situation (say, a startup with $5,000 in registered capital):

  • We have 2% test coverage, courtesy of Adjil Gik Jr (the dude recently completed an internship with us; at least the test suite is up and running).
  • We’re 8 weeks before release as we got this over-designed product that we just blew a year of angel funding on.
  • The lead dev just decreed a curfew (no leaving the office before midnight) and the whole codebase needs to be ‘improved’.

In the workable, stressful situation, writing these stubs and mocks may just appear to be a waste of our precious time. We may end up writing mocks and/or stubs hastily and we don’t have component testing anyway, so how about writing a few of these half-cooked tests instead, since they double as integration tests?

Doing without mocks and stubs doesn’t just save time. It helps increase test coverage faster. Further, retrofitting a test using doubles will likely require more (re)design (just try it with any class that instantiates its delegates). When coverage is low, we’re also likely to find that collaborators are difficult to instantiate ‘in vitro’. The redesign effort involved in making it possible to instantiate collaborators without firing up the whole whatchamacallit is the first step towards making these classes testable. Writing a stub won’t give you that.

Mockists often point out how tests written without doubles cause chain damage (lots of red lights) whenever anything breaks down. But if you’re relying on unit tests alone, there are integration-level errors that ‘true to form’ unit tests won’t catch. And by the way, mocks and stubs help mask dependency chains; that’s a design twist that hardly encourages us to modularize code.


The best time to write tests is when the shit hits the fan and we’ve lost confidence in our codebase. Mocks and stubs are neat, but we don’t usually need them to write tests. Half-cooked tests are easier and faster to write, providing more extensive coverage than mock-tests. True-to-form unit tests merely give the illusion of safety unless you have component testing and/or acceptance tests. By any means, write tests.

Acknowledgement and Disclaimer

This article has benefited from explanations in an article by Martin Fowler (Mocks Aren’t Stubs, also quoted inline). Needless to say, the views expressed in this article are mine. These views may appear radical, should be taken with a pinch of salt, and merely track my slow progression up Agile Hill. Screw up your code at will, don’t blame me.

Considering the growing(1) popularity of Unity (formerly Unity 3D), it seems that combining clean, powerful libraries and a polished editor into an integrated, cross-platform solution for 3D gaming will take you a long way toward popularity.

That, the wow factor of beautiful demos and screenshots, and reasonable pricing (the guys unrealistically asking for a 30% cut of your sales — I’m not referring to Apple — only got the wow straight).

Having said that, Unity’s cross-platform sirens both imply a lack of interoperability(2) and stem from a proprietary approach that is inevitably reminiscent of the ActionScript/Flash tandem: technologies that only survived their lack of scalability and the visceral contempt of the software engineering community thanks to an endless supply of uneducated, trigger-happy noobs milling banner ads and half-baked online entertainment (no offence to respectable Flashers out there, you know who you are).

So when I started looking at Unity seriously, I had questions and wasn’t all too surprised to find scarce answers:

  • Is Unity MVC friendly? (some answers here)
  • Which IDEs can Unity be used with? (only Visual Studio(?), using an apparently simple yet hacky setup I found somewhere)
  • Is Unity compatible with test driven development? (some answers here)

Among other things, I found that Unity prides itself on beheaded (nameless) classes, has no support for packages in C# (hopefully this will be fixed someday), and requires purpose-written unit testing technology.

I remember (way back in the day) reading Dave Small on the popularity of BASIC versus C/C++ (ST Magazine!). I also remember how Flash (and other technologies like, say, Processing) reinvented simplicity by… depriving novice programmers of the burden of OOP. Now consider the heavy hand Adobe had in JavaScript 4, an orthodox, dead-in-the-womb(3) OOP flavor that only manic repentance could explain.

We stand corrected. These guys surely did their homework. At this point I half-suspect that some will (maybe rightly) find Unity even easier than Flash. In IT, those who know history find shortcuts to success. Or will Caesar recover what belongs to him?

All this naturally leads to a question that only a trial can fully answer: would the hype scale up to a less-than-complacent effort at building a well-formed game framework, or should we all go back to our compiled code?

Let’s wander off topic. I just found a nice discussion about ‘game making tools for artists’ or some such. Now don’t go weak all over, them wizards got silver bullets.

(1) At the time of this writing, Unity boasted 1000+ games on the Apple app store.
(2) The bottom line being, even if Unity is more interoperable than I want to believe, tie-ins with platform dependent technologies will bring the ship back to port.
(3) AS3 is almost identical to defunct Javascript 4; Adobe CS4 can even compile Javascript 4 code. So much for scripting.

This article is about using NSLog with XCode unit testing; for a quick introduction to NSLog, follow the link.

Some programmers use traces to debug. Other programmers use breakpoints. Finally, some programmers use both.

So I wanted to trace some output while running (what formally looked like) a unit test, and I didn’t quite find my NSLog output in the console. Then I read a forum thread that explained where it goes.

Where are the traces gone?

OK: instead of [Run > Console], open [Build > Build Results]. Whether a build failed or succeeded, this window works like a tree with lots of items marked green (success) or red (failed!). Pick any of these items; on the right-hand side there is an icon for ‘more details’ or ‘text output’.

Pressing this icon will display the actual console output, along with your NSLog output.

So if you have a test suite X, drill down to the ‘Run test suite X’ item and expand the text output; this will display the information for the matching run, along with your NSLog output.

This is a quick introduction to unit testing with Xcode. This article does not provide a conceptual overview of unit testing.

As a first step, I did the following:

  1. Go to the project browser and select [ add > new file ]
  2. Select ‘Objective-C test case class’
  3. The file that opens immediately seems to contain a couple of tests. That, and a link to an overview document graciously provided by Apple.
  4. So I click, and read everything patiently (well, I’m planning to. I can display a lot of patience at times) (1).

One thing worth mentioning is that, up to stage 4 and notwithstanding footnote (1), this is a perfect introduction to unit testing, and a perfect approach to documentation. The IDE provides the way to get us started with what we want to do. The code hyperlinks the documentation.

Setting up a target for unit testing

Now to the point: what we want is to write a test suite and, minimally (assuming we don’t have continuous integration yet), wire it to the main project so the test suite runs whenever we build.

  1. We need a target to run the test suite. Go to [ project > New Target > Unit Test Bundle ].
  2. Make a group to hold your test classes (Optional. We might as well dump everything, everywhere).
  3. At this point I deleted my first test class and re-created it. That’s because we need to add the test class to the test target (NOT to the main target, so uncheck that when going through the wizard) and the easy-clicky way is via the wizard, when we create a new test class.
  4. Switch the active target to Test (or whatever your test target is named) and target the simulator (otherwise tests will be skipped).
  5. Press [Build]. If you haven’t modified the sample test class, this should run OK.
  6. Now modify the test class to fail a test, and press [Build] again. This should fail the build. If you check the build errors and keep unfolding, you’ll eventually find a reference to your failed test, along with whatever you set the assert to output when the test fails.

OK, this didn’t work for me at first. In the sample test class, set USE_APPLICATION_UNIT_TEST to 0. USE_APPLICATION_UNIT_TEST doesn’t seem to be for plain unit tests; it is for tests that require your application (or some other GUI, not sure…) to be running while you test.

NOTE: I have somehow gotten into the habit of stripping #import <Foundation/Foundation.h> off my .h files. Well. If you’re doing this, you’ll get errors when trying to compile your project classes to the test target. You need to add the project’s precompiled header (*.pch) to the build configuration for the test target (see below).

Running tests every time you build the main project

Now, this is trivial and elegant at once. To ensure we pass the tests before even thinking about running our app, all we need to do is drag the test target inside the main target.

Next time we build (targeting the simulator) test classes compile and run before actually running our app. If the tests fail, nothing runs until we fix the tests :)


There are three practical issues that tend to occur when trying to put legacy code under unit test with Xcode. In every case, the result is a stupidly broken build.

  1. Most of your existing source files are missing from the Test target.
    => Expand the main target for your project; expand the test target; now select Compile Sources in your main target. Select all files in the Compile Sources group (from the top-right panel) and drag them into Compile Sources in the test target.
  2. Frameworks in use are missing from the test target.
    => Use the same approach as in (1). This time, duplicate the content of Link Binary with Libraries.
  3. The precompiled header (.pch file) isn’t applied to the test target.
    1. Right click on the main target and choose ‘get info’.
    2. Select the build tab.
    3. Under Configuration: (top left) pick All Configurations
    4. Scroll down until you find the ‘Prefix Header’ entry in the ‘GCC 4.2 – Language’ section. Copy the pch path/name and duplicate it in your Test target settings.

(1) At this stage, not everything being as perfect as it ought to be, I bothered popping open a mail window and sending the link to myself. I wish I had an iPad icon on my desktop and could just drag the Safari window to it – or the link. Yea. Kiss the future.

…and we’re back!

Here are two contradictory propositions about test-first development (TFD):

  1. TFD helps you to design your software better.
  2. TFD should not change the way you design your software.

At the moment I’m still designing a library for intermediate-level developers. Static members, for all the bad they do, often make my programming life simpler without jeopardizing the future of my code altogether. So I guess that among intermediate developers, some will feel my pain better than I do, and develop a negative understanding of the following short-sighted truths if I do not provide static methods as part of the kit:

  • Factory methods are a good alternative to constructors when constructors are not available (minor)
  • Where one, and only one, instance of a service is, for the foreseeable future, desirable, static methods avoid burdening oneself, or other developers, with the need to bounce a singleton around a software system (major).

As a quick reminder or introduction, here’s why TFD (among several other, theoretical and formal, protagonists) doesn’t like static members:

  1. A unit test validates a class. This is done to ensure that the object behaves as expected, so that, if the object appears to misbehave in some way, you can rely on the unit test failing or passing to tell you either yes, the object is misbehaving, or no, the object is misinformed – it is interacting with something else, and that thing must be broken.
  2. A static member is one that you cannot, in many languages, replace with a stub. That means you can’t isolate the behavior of A, which uses static member S, from the behavior of the system {A,S}.
  3. Since we can’t easily tell whether A, S, or {A,S} is failing when A appears to fail, the unit test for A is imperfect. More importantly, {A,S} is harder to maintain.
  4. Ergo, static members are undesirable.

As an API designer, what should I do? Well maybe I’ll change my mind later, but for now, here is how I feel about it:

  1. If I were designing an API for experienced developers, I would most likely avoid statics. I couldn’t use them anyway, because the use of static members would taint my API. Not all experienced developers have a deep grasp of why statics might be bad; this is not how humans learn. Most experienced developers feel that static members are bad; this is what developers learn.
  2. Since I am designing an API for intermediate developers, I may be ill-inspired not to provide them with convenient static functions. In fact, I should wonder whether my static functions are a poor substitute for globals.
  3. I need software written using my libraries to be unit-testable.
  4. Ergo, I shall have to write a stub version of my libraries – an empty shell containing all the classes and function signatures in my libraries. This way developers can unit test software written using my library, and enjoy readily available stubs providing support for testing (e.g., logging API calls).

I find the ‘case of the untestable static members’ delectable. I may be more against static members than many developers are, but today I’ve been the devil’s advocate, so I’m just about to turn my coat but…

…not in this post.

Once again I’m about to do the wrong thing – forgo my assumption that stumbling steps are enlightening, and not write this post. I’ve just completed writing a unit test for a foo():Void haXe function with no flow control to its name. More about this another time.

Bear in mind, this article isn’t about what is or isn’t worth testing. My first step into TFD consists in having a fair go at writing tests before I write the code, without breaking my usual design flow – correcting my design flow is OK, if that’s what it amounts to. Correcting, yes – breaking, no.

Can we test the main() method? Do we want to?

I’m taking over a colleague’s work; now I’m sniffing around the Main class and thinking, hell, what shall I do with it? Test? Well yea, I should test it first. Since there are no unit tests for this project yet, I might as well consider it legacy code.

My colleague did something that I tend not to do – they just use Main.main() to instantiate a singleton. I guess it really should be a singleton, but the point is, they escaped the static logic behind main. So I’ll follow in their steps, and instead of testing main(), I should probably test whatever gets instantiated by main().

Unfortunately, as a good initializer does, mine mostly invokes other initializers. Essentially, a constructor is a static function. So yea, I could pass an array of classes and instantiate them by reflection, but I don’t want to do that. Too much trouble.

I had the neat idea of just calling the constructor to make sure it works, but that’s not a unit test – that’s actually testing the whole f*** thing. Too integrated.

Why don’t I feel like testing main() and constructors anyway?

I’d like to test how my code changes the software system I’m building. Although I’ll come back to this later and discuss it, a function that doesn’t return a value isn’t making local changes, so from this point of view there’s nothing to test inside the unit.

So what shall I do?

Flowing with TFD?

Here’s my problem with test-first development right now. I need a method that flows. If it flows slowly, fine, but right now my flow is frozen. Let’s get flowing and try this…

  • I’ll always write a test for each function. That’s easier than asking every time whether to write a test.
  • If I can’t test the function, at the least I’ll document the test function with my design intention for the non-test function – that’s because I need somewhere to write my design intention. Then I’ll paste this doc into the function’s code body so I can use it to write the actual code (that’s because my short-term memory is very limited).

I’d still like to call these functions to make sure they don’t fail. That wouldn’t be a unit test, however. Oh my…

Working notes

No time for a conclusion today – just a few ideas

  • Ideally, I’d really like to write my tests first. This is why I’d rather write an ‘untestedConstructor’ function than no function at all in the test class. untestedConstructor() may seem horribly pedantic, but I like it: this way you know I haven’t tested this just by looking at the tests. Also, I can move design comments to the test class – if the tests are gonna stand for documentation, I think that’s alright.
  • So far, I only note that I find it difficult to test root initializers. The type dependencies involved make it appear so difficult as to be counter-productive. However, even with a statically typed language such as C++, doing this is likely not impossible (just map another .c file to the same .h file?).
  • One view about testing is that only state transitions matter. ‘unit’ testing consists in testing a class. Applying this, we could say that a method that causes no local change needn’t be tested.
  • Surely, a controller doesn’t manipulate its own state. A controller should hold no state. Does that mean a controller shouldn’t be tested? Oh my, controllers are where the case logic of a program resides… Maybe a productive idea – how about extracting the case logic, so that the case decision can be returned from a function as a value? My counter-intuitive bet is that such functions should then be private – unfortunately, I’ve heard over and over that a good unit test focuses on the public interface of a class. More contradiction. Or maybe we should just distinguish ‘do’ classes from ‘case’ classes; then we’ll have a good reason to make ‘case logic’ methods public.
  • Another view about testing might be that it is the behaviour of a class we are trying to test. Even if a class doesn’t manipulate its own state, we still want to assert whether the changes it makes to the outside world are as intended. TDD programmers that use mock objects liberally seem to be on this side.

Enough musings for today.