Tests and Mocks in RSpec

A developer approaching TDD and BDD often feels flooded by new concepts and terminology that pop up in tutorials and blog posts.

One such concept is mocking: a class of testing framework functionality that replaces real objects with “stunt doubles”, objects of limited functionality able to mimic actual instances to a degree.

Mocking is, as with all tools, valuable if applied sparingly and in the correct situation. Many developers new to TDD, myself included, end up mocking too much and using mocks in place of stubs or plain classical black-box tests. There are also many competing nomenclatures at play, as shown by this excellent post by Martin Fowler.

Common pitfalls

Improper usage of mocks results in a variety of harmful effects that range from a mildly unstable testing suite to entirely useless specs.

The most common issue that can arise is writing tests that are too coupled to the class internals. Mocking out too many methods, especially those belonging to the class that is being tested, can lead to a test that either breaks at every slight change to private interfaces, or is an outright false positive, i.e. it will always pass.

One such case is exemplified below.

describe '#read_and_parse' do
  let(:contents) { '123|asd' }

  it "returns the correct parsed content" do
    subject.should_receive(:read_file).and_return(contents)

    # assuming the method just splits the file contents by |
    subject.read_and_parse.should == ['123', 'asd']
  end
end

This would be a bad test because it does not test the file reading behavior, but only the parsing step.
It is also a case of a method that should probably be split into two, but in this situation we want to highlight that if the return format of #read_file changes, we will never find out.
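As a sketch of what that split could look like (the class and method layout here is hypothetical, not from the original code), reading and parsing become separate public steps, so each can be specced on its own:

```ruby
# Hypothetical refactoring: reading and parsing as separate public steps.
class RecordReader
  def initialize(path)
    @path = path
  end

  # Touches the filesystem; best covered with a real sample file.
  def read_file
    File.read(@path)
  end

  # Pure string manipulation; trivial to spec with inline data.
  def parse(contents)
    contents.split('|')
  end

  def read_and_parse
    parse(read_file)
  end
end
```

With this shape, #parse can be tested against inline strings with no mocks at all, and #read_file can be exercised against a bundled sample file.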

A more severe problem is the self-fulfilling test, an issue not exclusive to mocking, but one that tends to arise more often in that context.

describe '#read_file' do
  let(:contents) { '123|asd' }

  it "returns the file contents" do
    subject.stub(:read_file).and_return(contents)

    subject.read_file.should == contents
  end
end

The above mocked method tests nothing other than the fact that a Ruby method actually returns a value. We will leave that to Matz and the core team.

Shallow testing versus deep testing

Even with the drawbacks, mocking always has a place in unit testing and TDD suite design.

My personal opinion is that mocking should not be applied to integration testing, where a thorough safety net of end-to-end feature specs ensures the overarching goals of the project are always met and kept in sight.

The decision to use mocks in unit testing is really a choice of style between so-called “shallow” testing and canonical testing, named “deep” by contrast.

When doing shallow testing, the developer concentrates on the immediate vicinity of the code being written, by applying mocks to isolate and exercise only a specific part of the system.

Canonical or deep testing instead runs live code as much as possible, concentrating on public methods and their possible outcomes. The deep testing approach is better suited for less atomic operations.

Shallow testing is recommended as the first layer of an ideal concentric model, where smaller parts are exercised first. This is especially important when dealing with complex algorithms that are not easily split into smaller methods, and where execution order and specific instructions can matter as much as the overall outcome.

it "should execute an output component with the provided SSML content" do
  component = Punchblock::Component::Output.new(:ssml => content)
  subject.should_receive(:execute_component_and_await_completion).once.with(component)
  subject.output content
end

In this case, lifted straight from the Adhearsion test suite, we mock out an external component that would be too complicated to have running live.

More precisely, we avoid having to spin up an actual CallController with a live Call object, which would require too much setup to be useful in this case.

Replace your mocks with live code

Interacting with external resources is the main source of confusion for the beginning TDD practitioner. Communicating with a network, reading files or accessing a database are all examples of situations where mocking looks like a good fit, while accomplishing little in actual practice.

Dealing with files

When tasked with testing a read operation on the contents of a file, a developer could decide on using a mock. That way, it seems, the test stays self-contained and avoids actually hitting the filesystem.

# Mocking File.open(local_filename, 'w') { |f| f.write(doc) }

describe "#write_stuff" do
  it 'writes the specified content' do
    mock_handle = double('file handle')
    File.should_receive(:open).with(local_filename, 'w').and_yield(mock_handle)
    mock_handle.should_receive(:write).with(doc)

    subject.write_stuff
  end
end

The issue is made worse if your test requires a write operation, because a mock looks like a quick way to avoid having to manage temporary files.

Especially if you are using the open, write, close sequence of operations, you end up mocking a lot of methods and not really testing much.

For example, the above test would always pass, but if you were dealing with binary file content, it would fail in actual execution because the ‘w’ mode is not correct.

The recommended approach is instead bundling a sample file with the specs. In case you need to write to the filesystem, use the FakeFS gem.

This way, you can exercise your code on real data, without coupling tests and implementation together too tightly.

describe "exporting to disk" do
  include FakeFS::SpecHelpers

  subject { SomeClass.new }
  let(:contents) { '123|asd' }
  let(:file_path) { "/tmp/my_file_output.txt" }

  it "writes the expected content to disk" do
    subject.write_stuff(contents, file_path)

    File.exists?(file_path).should be_true
    output_file = File.read(file_path)
    output_file.should =~ /123/
  end
end

The main caveat here is that any sample file used as input has to be maintained and kept up to date, and you should always have some integration tests checking that the results expected from operations on the file data actually match up with the application logic.

Handling HTTP requests

HTTP request testing shares many of the concerns with file operation specs, including having to maintain compatibility with formats that might change.

Usually, issues are made even worse by the fact that HTTP operations require a number of steps involving blocks, which forces you to create a large number of mocks and expectations just to get the actual request through to the code under test.

def do_query
  uri = URI.parse('http://example.com/some_path?query=string')

  Net::HTTP.start(uri.host, uri.port) do |http|
    response = http.request(Net::HTTP::Get.new(uri.request_uri))
    response.body
  end
end

describe "#do_query" do
  let(:http_response) { "OK" }

  it "queries the endpoint for data" do
    mock_uri = double('uri', host: 'example.com', port: '80', request_uri: '/some_path?query=string')
    mock_http = double('http')
    mock_response = double('response', body: http_response)
    URI.should_receive(:parse).and_return(mock_uri)
    Net::HTTP.should_receive(:start).with(mock_uri.host, mock_uri.port).and_yield(mock_http)
    mock_http.should_receive(:request).and_return(mock_response)

    subject.do_query.should == http_response
  end
end

There is a long list of issues with the above test. First of all, it tests next to nothing: the return values are all mocked, so it would not catch any actual problems.

Secondly, it is very much tied to the implementation. Assume you find out that Net::HTTP is particularly slow for your application: switching it out for Curb (a worthwhile change in itself) would force you to rewrite the test completely, while still achieving next to nothing in pursuit of your goal.

A good solution is to not use any mocks at all, and instead interact with a stubbed-out API through a library such as WebMock.

WebMock's goal is to stub out HTTP requests from the outside, allowing your code to be exercised in live conditions, provided you have a correct set of API responses.

To mitigate the issues with having to maintain a copy of API requests in your specs, using VCR is recommended.

VCR will record a live HTTP session in a so-called cassette, which is then replayed every time you run the specs. Different suites can use different cassettes, and they can be invalidated and reloaded by simply deleting the generated YAML or JSON files.

In addition, you can edit those request recordings to reflect changes.
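Getting VCR into a suite takes very little setup. A minimal configuration (assuming WebMock as the stubbing backend and a spec/cassettes directory for the recordings; both choices are ours, not mandated) could look like this:

```ruby
# spec/spec_helper.rb
require 'vcr'

VCR.configure do |c|
  # Directory where recorded cassettes are stored as YAML files
  c.cassette_library_dir = 'spec/cassettes'
  # Intercept HTTP requests through WebMock
  c.hook_into :webmock
end
```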

The above can be rewritten in a much better way, as the following examples show.

# Using WebMock
describe "#do_query" do
  let(:http_response) { "OK" }

  it "queries the endpoint for data" do
    stub_request(:any, 'example.com').to_return(body: http_response)

    subject.do_query.should == http_response
  end
end

# Using VCR - this will hit the URL the first time it is run
# In this case you need to assume the response to the API call is actually "OK"
describe "#do_query" do
  let(:http_response) { "OK" }

  it "queries the endpoint for data" do
    VCR.use_cassette('myspecs') do
      subject.do_query.should == http_response
    end
  end
end

Dealing with niche cases

On a recent project, we spent some time investigating how to test SSH requests. The goal was to SSH into a machine, retrieve some statistics, and write them to disk.

After a few failed tries, we realized the best way would have been to simply not mock anything.

Since the project has a predictable development environment, it was possible to simply SSH into that box, collect live stats and then check the resulting output.

That approach does not necessarily apply to your scenario, but it is worth considering over writing very deep, overly complex mocks that ultimately only test that rspec-mocks actually works.


Mocking is part of the bag of tricks for any resourceful TDD programmer, but it is also probably the most complex to apply properly.

We briefly reviewed some strategies and workarounds in this first post, and will be expanding on these and other methods such as strategy inclusion in future content.

