Multithreading in Go - A Tutorial

February 16th 2018 - Cory Finger

(Updated 10/19/2018)


So when I heard about Go, the absolute first thing I wanted to try out was multithreading.

I know - multithreading is one of the most painful things a programmer has to deal with. It's just asking for race conditions, data corruption, and all sorts of awful things. It should be avoided at all costs until absolutely necessary, and then left alone in a corner of the codebase that no one dares look at.

It's not too hard in the Go programming language, it turns out. It's built into the language as a first-class feature.

It's actually really fun!

It may be helpful to check out the helper files that go along with this tutorial!

If you're absolutely new to Go, it might help out to take a look at my tutorial on Building a JSON API in Go!



The built-in primitive that we're going to use is called a goroutine.

They look like this:
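Here's a minimal sketch (the second print and the short sleep are my additions, just so the example runs to completion):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// The go keyword launches this anonymous function on its own goroutine.
	go func() {
		fmt.Println("This will happen at some unknown time")
	}()

	fmt.Println("This will happen right away")

	// Sleep briefly so the program doesn't exit before the goroutine runs.
	time.Sleep(100 * time.Millisecond)
}
```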

What's happening here?

In that example, the following code is put on a background thread to happen simultaneously with the rest of the code:

fmt.Println("This will happen at some unknown time")

This is extremely powerful because it allows us to do multiple complex tasks in our code simultaneously. The speed gains for the overall program can be enormous.

Why is that usually hard?

The problem with multithreading is that we can't predict when the code will actually execute. Some weird stuff can happen when lines of code don't execute in a known order.

Like, really weird stuff.

Let's take the example of a simple math problem:
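A sketch of such a program, using the same variables as the walkthrough below (a and b), with one multiplication moved onto a goroutine. Note that this code intentionally contains a data race:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	a := 1
	b := 2

	// This multiplication now runs on its own goroutine, at an
	// unpredictable time. (This is an intentional data race!)
	go func() {
		b = a * b
	}()

	a = b * b

	// Give the goroutine a chance to run before printing.
	time.Sleep(100 * time.Millisecond)

	fmt.Printf("a = %d, b = %d\n", a, b)
}
```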

What's the final output of that program?

If your results are anything like mine, this will be the output:

a = 4, b = 8

That's probably not what we'd expect to happen if we were reading this code and didn't know the intricacies of multithreading.

If this was normal, single-threaded code, this would be the order of operations:

a = 1
b = 2

b = a * b = 1 * 2 = 2
a = b * b = 2 * 2 = 4

a = 4, b = 2

But in this code, b = a * b will happen at an unpredictable time! You never know what the output will be!

Race Conditions

This happens because, sometimes, b = a * b executes after a = b * b. Other times, b = a * b happens first.

This problem is called a race condition: the output of the program changes based on which line of code executes first, and the order of execution is out of our control. These problems are so hard to track down because they can disappear and reappear every time we run the program.

This race condition is currently making the order of operations:

a = 1
b = 2

a = b * b = 2 * 2 = 4
b = a * b = 2 * 4 = 8

a = 4, b = 8

That's a big problem!

That's the sort of problem that can make planes fall out of the sky and customer data suddenly disappear. We don't want that to happen in our production code!

Solution: Channels

I want to introduce the concept of channels. Channels allow for safe communication between different threads in a Go program.

Through good thread-to-thread communication, we can make sure our asynchronous code runs as fast as possible while making sure that we're still in control of the data flowing through it.

Let's solve that problem from earlier:
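A sketch of the fix, using a channel to force the ordering (same variables as before):

```go
package main

import "fmt"

func main() {
	a := 1
	b := 2

	operationDone := make(chan bool)

	go func() {
		b = a * b

		// Signal that this goroutine's work is finished.
		operationDone <- true
	}()

	// Block the main thread until the goroutine signals completion.
	<-operationDone

	a = b * b

	fmt.Printf("a = %d, b = %d\n", a, b) // a = 4, b = 2
}
```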

What does that do?

Let's break this down by each new line:

operationDone := make(chan bool)

Make a channel that consumes and outputs bool values.

go func() {
    b = a * b

    operationDone <- true
}()

When the goroutine's work is finished, it pushes a bool into the channel.


<-operationDone

This is where the magic happens.

This line tells the current (main) thread to wait for a message to be pushed into that channel.

When the message is pushed onto the channel, this line will pop the message out of the channel and the thread continues.

Due to our scheduling code, we can now get the intuitive output from this program. We can get it 100% of the time.

a = 4, b = 2

What can we do with this?

Oh… So many things…

If you've ever used Javascript, you've probably heard of a Promise. A Promise allows Javascript to execute asynchronous code and return a response.

This allows us to easily program code that needs to do multiple things simultaneously.

But when we have multiple Promise objects happening at once, organizing them can be a real pain. What if the computation you're doing relies on the output of multiple Promise objects? What if they need to interact?

Some people would give up and make them happen synchronously. But that's not the fun (see: blazing fast) way to do things!

Common Scheduling Tasks

One of my favorite Javascript libraries is called Bluebird. It allows us to schedule the interactions between multiple Promise objects, coordinating them wherever needed while still maintaining performance.

Some utility methods Bluebird adds:

  • Race Promise objects against each other

  • Wait for some Promise objects in a list to finish

  • Wait for all Promise objects in a list to finish

  • Iterate through a list of Promise objects

A lot of the situations where we'd need to schedule asynchronous tasks are taken care of for us in an easy-to-use way.

But that's Javascript. This is Go.

Let's have some fun

Let's code a bunch of the methods that Bluebird provides. Let's also code a few methods that Bluebird doesn't provide. Let's do it all in the Go programming language.

If we can write a lot of different types of scheduled threading, through goroutines and channels, we'll know what's in our toolbox whenever we encounter a problem that might be able to make use of them.

Racing Threads - Improve our Latency

Users like fast applications. Reality gets in the way.

One of my favorite things to do is race pieces of code against each other. It all feels very cutthroat. Like we're telling the code "I don't care who answers the question. Just make it fast!"

Let's say we need to accomplish a task - Search the internet, in this case.

Everything starts out simple. We'll utilize the Google Search API to make the query and return it to the user:

In this example we're going to pretend that Google has some really bad network issues today and that responses can come back anywhere between 1 second and 10 seconds.
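Assembled into a full program, it might look like this sketch. The query string is a placeholder, and googleIt fakes the network call with a random 1-10 second sleep:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// googleIt stands in for a real Google Search API call; it sleeps
// for a random 1-10 seconds and then pushes a canned response.
func googleIt(respond chan<- string, query string) {
	time.Sleep(time.Duration(rand.Intn(10)+1) * time.Second)

	respond <- "A Google Response"
}

func main() {
	query := "multithreading in go" // placeholder query
	respond := make(chan string)

	go googleIt(respond, query)

	// Block until googleIt pushes its response onto the channel.
	queryResp := <-respond

	fmt.Printf("Sent query:\t\t %s\n", query)
	fmt.Printf("Got Response:\t\t %s\n", queryResp)
}
```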

Let's go through this in pieces:

respond := make(chan string)

We make a channel for the asynchronous function to use when responding. This data type is safe to use to communicate between goroutines.

func main() {
    go googleIt(respond, query)

func googleIt(respond chan<- string, query string) {
    time.Sleep(time.Duration(rand.Intn(10)+1) * time.Second)

    respond <- "A Google Response"
}

We execute googleIt in a separate goroutine (thread). When googleIt executes, it will sleep for a random time (between 1-10 seconds) and then push the response (in this case a string) onto the channel.

googleIt takes a channel that accepts an input (chan<-).

There are three data types for channels:

  • chan means it can take input data or it can pull output data
  • chan<- means it can only be used to input data
  • <-chan means it can only be used to pull output data

queryResp := <-respond

fmt.Printf("Sent query:\t\t %s\n", query)
fmt.Printf("Got Response:\t\t %s\n", queryResp)

This part waits for respond to get some data pushed onto it. When it does get data pushed onto it, that data is put into the variable queryResp. Then queryResp can be treated like a normal string variable.

OK. Our search routine is running. It works and is filling customer queries. But our customers say it's not fast enough. They need faster results.

Let's give them faster results!

You've heard that Bing just upgraded its network and network requests to it only take 1-5 seconds.

But there's still a chance that Google will be faster on any given request. Bing has the better worst case, but in the average case either one could win.

We could monitor their network connections over a period of time and judge their relative stability. But that's all theoretical talk. We don't have time for it.

Let's do something about it right now:
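A sketch of the race, assuming a hypothetical bingIt that behaves like googleIt but takes 1-5 seconds:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// googleIt pretends Google responds in 1-10 seconds.
func googleIt(respond chan<- string, query string) {
	time.Sleep(time.Duration(rand.Intn(10)+1) * time.Second)
	respond <- "A Google Response"
}

// bingIt pretends Bing responds in 1-5 seconds.
func bingIt(respond chan<- string, query string) {
	time.Sleep(time.Duration(rand.Intn(5)+1) * time.Second)
	respond <- "A Bing Response"
}

func main() {
	query := "multithreading in go" // placeholder query

	// Buffered with 2 slots so the losing goroutine's send doesn't block forever.
	respond := make(chan string, 2)

	go googleIt(respond, query)
	go bingIt(respond, query)

	// Take whichever response arrives first; ignore the other.
	queryResp := <-respond

	fmt.Printf("Sent query:\t\t %s\n", query)
	fmt.Printf("Got Response:\t\t %s\n", queryResp)
}
```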

This looks almost exactly the same, but something beautiful happens.

First, a slight difference:

The buffered channel

respond := make(chan string, 2)

Here we're making a buffered channel with 2 slots in the buffer.

The reason we're using a buffered channel instead of a regular channel is that a regular channel would block this action:

// This is bad
blockingChannel := make(chan string)
blockingChannel <- "First String"
blockingChannel <- "Second String"
firstData := <-blockingChannel
secondData := <-blockingChannel

// Code will never reach here
fmt.Println("The code is blocked from getting here")

This code deadlocks because a send on an unbuffered channel blocks until a receiver is ready to pull the data off of it.

The first send will pause the thread forever, since the receives that would unblock it are never reached.

Here's an intuitive way to fix it:

// This works but is not ideal
blockingChannel := make(chan string)
go func() { blockingChannel <- "First String" }()
firstData := <-blockingChannel

go func() { blockingChannel <- "Second String" }()
secondData := <-blockingChannel

// Code will reach here now
fmt.Printf("No longer blocked.\n %s\n %s\n", firstData, secondData)

But that's a little hard to read. It also means that we can only write data to this channel as fast as we can read from it. These goroutines would be frozen until something tries to read data - which might not be exactly what we want (a lot of goroutines piling up).

A buffered channel will let us create a channel where multiple pieces of data can exist within it at once, and lines that send data to it aren't blocked (until the buffer is full):

// This works!
blockingChannel := make(chan string, 2)
blockingChannel <- "First String"
blockingChannel <- "Second String"
firstData := <-blockingChannel
secondData := <-blockingChannel

// Code will reach here now!
fmt.Printf("No longer blocked.\n %s\n %s\n", firstData, secondData)

We know that we'll be running 2 simultaneous tasks, so we create a buffer with 2 slots. That way we don't accidentally leak goroutines during the execution of our code.

The end result

queryResp := <-respond only waits for the first value to be pushed onto the channel. It doesn't care whether there is a second value or not.

That leads to this being a race between Google and Bing - It doesn't matter who's faster on average - Our customer will get the fastest response back they possibly can.

Yay for good customer service!

Timeouts - Improve our Worst Case Scenario

Let's build on this idea of racing threads.

We want to do a Google Search but don't want the user to be waiting for an overly long period of time. We want to set a maximum amount of time that we'll wait before giving up.
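A sketch of what that looks like, reusing the googleIt stub from earlier and giving up after 5 seconds:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// googleIt pretends Google responds in 1-10 seconds.
func googleIt(respond chan<- string, query string) {
	time.Sleep(time.Duration(rand.Intn(10)+1) * time.Second)
	respond <- "A Google Response"
}

func main() {
	// Buffered with 1 slot so a response that arrives after the
	// timeout doesn't leave the goroutine blocked forever.
	respond := make(chan string, 1)

	go googleIt(respond, "multithreading in go")

	select {
	case queryResp := <-respond:
		fmt.Printf("Got Response:\t %s\n", queryResp)
	case <-time.After(5 * time.Second):
		fmt.Println("Query timed out after 5 seconds")
	}
}
```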

We introduce a couple new things here:

select statements

select {
case queryResp := <-respond:
    fmt.Printf("Got Response:\t %s\n", queryResp)
case <-time.After(5 * time.Second):
    fmt.Println("Query timed out")
}

A select statement waits on multiple communication operations at once.

In this case, it waits on our two channels and runs the case for whichever one receives a value first.

I like to imagine it's a switch statement but for data streams.

The time.After utility function

case <-time.After(5 * time.Second):

This is just a function that generates a channel and, after the amount of time has passed, pushes the current time onto that channel.

If you were to program this yourself, with what you've learned, it'd look something like this:

timePassed := make(chan bool)

go func() {
  time.Sleep(2 * time.Second)
  timePassed <- true
}()

The reason we use time.After here, instead of hand-rolling it with the channels we've learned, is that it's the recommended best practice in the Go community.

It's also really succinct and that's pretty cool.

Waiting on all goroutines to finish

Let's say we have a bunch of things that we want to happen simultaneously, but we don't want to do anything else until they've all finished.

This is the perfect time to introduce sync.WaitGroup:

There's a new data type here that we haven't seen before:

The sync.WaitGroup

This is a data type dedicated to blocking a thread until a certain number of actions have been completed.

var wg sync.WaitGroup
wg.Add(5)

This initializes the WaitGroup and adds 5 to the number of tasks that the WaitGroup will wait for. Since it was initialized at 0, it brings the total number of tasks to 5.

func checkDNS(...) {
  defer wg.Done()

There are two things at play here:

defer is an interesting statement that says "Do this when the surrounding function returns."

So this:

func checkDNS(...) {
    defer wg.Done()

    time.Sleep(time.Duration(rand.Intn(10)) * time.Second)
    respond <- fmt.Sprintf("%s responded to query: %s", ns, query)
}

Actually does the same thing as this:

func checkDNS(...) {
    time.Sleep(time.Duration(rand.Intn(10)) * time.Second)
    respond <- fmt.Sprintf("%s responded to query: %s", ns, query)

    wg.Done()
}

The reason we use defer is that, as functions get more complicated, they can gain multiple return paths. It's also easy to read: we know that, whatever happens, wg.Done() will be called when the function finishes.

Lastly there's the question of what wg.Done() actually does. When we called wg.Add(5) earlier, it added 5 to the number of tasks that need to be completed. Whenever we call wg.Done(), it subtracts 1 from the number of tasks to be completed.

go checkDNS(respond, &wg, "", "")
go checkDNS(respond, &wg, "", "")
go checkDNS(respond, &wg, "", "")
go checkDNS(respond, &wg, "", "")
go checkDNS(respond, &wg, "", "")


When each of these checkDNS calls finishes, they will call wg.Done(). Each time they call wg.Done(), it will subtract 1 from the counter that we set (starting from 5).

wg.Wait() pauses the current thread until the counter of the WaitGroup hits 0. In this case, it will unpause the thread when all 5 checkDNS() calls are finished executing.


close(respond)

for queryResp := range respond {
    fmt.Printf("Got Response:\t %s\n", queryResp)
}

We close() the channel to indicate that no more values will be placed into it.

That allows us to iterate over it via range and get the 5 values that have been placed into it from our DNS check.

Waiting on some goroutines to finish

Now we can wait on a set number of goroutines to finish, which is great. But let's combine that with the Racing Threads we talked about earlier.

Because high latency is annoying. When our users use our tool they want to see results quickly.

Nameservers sometimes disagree with each other. Some are out of date, some are untrustworthy, some just have bugs or network issues. Let's say that, in our case, if we raced all of them and took only the fastest result, we'd open ourselves up to a good number of accuracy issues.

We want a certain level of accuracy, but we also want to increase the overall speed of our application.

Let's change the above example to race all 5 nameservers simultaneously but only wait for the first 3 to finish and output their results. With 3 of them voting, accuracy improves drastically, and the slowest 2 can't hold us back.

Honestly, everything we've discussed in the previous sections makes this one easy to write and it comes without any surprises. Which is great!

Have fun!

That's it for this introduction to multithreading patterns in Go!

As we've seen from these examples, it's not that tough to create a safe and fast multithreaded application in the Go programming language.

Let me know in the comments below if there are any patterns you think should be added to this list or any examples that could be improved!

Go create something awesome!