# Getting started with Goroutines and channels

This is part 3 of my experience as a new user of Go, focusing on concurrency with Goroutines and channels.
For installation, testing, and packages, see Getting started with Go, and for pointers see Getting started with Go pointers.
## Counting HTTP requests

The server below counts HTTP requests, and returns the latest count on each request.
To follow along, clone https://github.com/jldec/racey-go, and start the server with 'go run .'
package main
import (
"fmt"
"net/http"
)
func main() {
var count uint64 = 0
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
count++
fmt.Fprintln(w, count)
})
fmt.Println("Go listening on port 3000")
http.ListenAndServe(":3000", nil)
}
$ curl localhost:3000
1
$ curl localhost:3000
2
Let's try sending multiple requests at the same time. This command pipes URLs from a file to xargs, which spawns 4 curl processes at a time.
$ cat urls.txt | xargs -P 4 -n 1 curl
The file contains 100 lines, so 3 runs should end at a nice round 300. Instead, on systems with more than 1 core, you may see something like this after 3 runs:
289
292
291
For comparison, replace the Go server with 'node server.js' (again after 3 runs):
298
299
300
Now repeat the experiment with the race detector turned on. The detector will report a problem on line 12 of main.go, which is `count++`.
$ go run -race .
Go listening on port 3000
==================
WARNING: DATA RACE
Read at 0x00c000138280 by goroutine 7:
main.main.func1()
/Users/jleschner/pub/racey-go/main.go:12 +0x4a
net/http.HandlerFunc.ServeHTTP()
/Users/jleschner/go1.16.3/src/net/http/server.go:2069 +0x51
net/http.(*ServeMux).ServeHTTP()
/Users/jleschner/go1.16.3/src/net/http/server.go:2448 +0xaf
net/http.serverHandler.ServeHTTP()
/Users/jleschner/go1.16.3/src/net/http/server.go:2887 +0xca
net/http.(*conn).serve()
/Users/jleschner/go1.16.3/src/net/http/server.go:1952 +0x87d
Previous write at 0x00c000138280 by goroutine 9:
main.main.func1()
/Users/jleschner/pub/racey-go/main.go:12 +0x64
net/http.HandlerFunc.ServeHTTP()
/Users/jleschner/go1.16.3/src/net/http/server.go:2069 +0x51
net/http.(*ServeMux).ServeHTTP()
/Users/jleschner/go1.16.3/src/net/http/server.go:2448 +0xaf
net/http.serverHandler.ServeHTTP()
/Users/jleschner/go1.16.3/src/net/http/server.go:2887 +0xca
net/http.(*conn).serve()
/Users/jleschner/go1.16.3/src/net/http/server.go:1952 +0x87d
## Data races

From the race detector docs:
A data race occurs when two goroutines access the same variable concurrently and at least one of the accesses is a write.
It's clear that 'count++' modifies the count, but what are goroutines and where are they in this case?
## Goroutines

Goroutines provide low-overhead threading. They are easy to create and scale well on multi-core processors.
The Go runtime can schedule many concurrent goroutines across a small number of OS threads. Under the covers, this is how the http library handles concurrent web requests.
Let's start with an example. You can run it in the Go Playground.
package main
import (
"fmt"
"time"
)
func main() {
ch := make(chan string)
// start 2 countdowns in parallel goroutines
go countdown("crew-1", ch)
go countdown("crew-2", ch)
fmt.Println(<-ch) // block waiting to receive 1st string
fmt.Println(<-ch) // block waiting to receive 2nd string
}
func countdown(name string, ch chan<- string) {
for i := 10; i > 0; i-- {
fmt.Println(name, i)
time.Sleep(1 * time.Second)
}
ch <- "blastoff " + name
}
Each 'go countdown()' starts a new goroutine. Notice how the countdowns are interleaved in the output.
...
crew-1 3
crew-2 3
crew-2 2
crew-1 2
crew-1 1
crew-2 1
blastoff crew-2
blastoff crew-1
## Channels

Channels allow goroutines to communicate and coordinate.

In the example above, `<-ch` (receive) will block until another goroutine uses `ch <-` to send a string to the channel. This happens at the end of each countdown.

Sends will also block if there are no receivers, but that is not the case here.
There are many other variations for how to use channels, including buffered channels which only block sends when the buffer is full.
## Atomicity

Given that net/http requests are handled by goroutines, can we explain why there is a data race when the function which handles a request increments a shared counter?

The reason is that `count++` requires a read followed by a write, and these are not automatically synchronized. One goroutine may overwrite the increment of another, resulting in lost writes.

To fix this, the counter has to be protected to make the increment operation atomic.
## Counter-go

github.com/jldec/counter-go demonstrates 3 different implementations of a threadsafe global counter.

1. **CounterAtomic** uses `atomic.AddUint64` and `atomic.LoadUint64`.
2. **CounterMutex** uses `sync.RWMutex`.
3. **CounterChannel** serializes all reads and writes inside 1 goroutine with 2 channels.

All 3 types implement a Counter interface:
type Counter interface {
Get() uint64 // get current counter value
Inc() // increment by 1
}
The modified server will work with any of the 3 implementations, and no data race should be detected.
package main
import (
"fmt"
"net/http"
counter "github.com/jldec/counter-go"
)
func main() {
count := new(counter.CounterAtomic)
// count := new(counter.CounterMutex)
// count := counter.NewCounterChannel()
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
count.Inc()
fmt.Fprintln(w, count.Get())
})
fmt.Println("Go listening on port 3000")
http.ListenAndServe(":3000", nil)
}
### Coordination with channels

Of the 3 implementations, CounterChannel is the most interesting. All access to the counter goes through 1 goroutine which uses a select to wait for either a read or a write on one of two channels.

Can you tell why neither `Inc()` nor `Get()` should block?
package counter
// Thread-safe counter
// Uses 2 Channels to coordinate reads and writes.
// Must be initialized with NewCounterChannel().
type CounterChannel struct {
readCh chan uint64
writeCh chan int
}
// NewCounterChannel() is required to initialize a Counter.
func NewCounterChannel() *CounterChannel {
c := &CounterChannel{
readCh: make(chan uint64),
writeCh: make(chan int),
}
// The actual counter value lives inside this goroutine.
// It can only be accessed for R/W via one of the channels.
go func() {
var count uint64 = 0
for {
select {
// Reading from readCh is equivalent to reading count.
case c.readCh <- count:
// Writing to the writeCh increments count.
case <-c.writeCh:
count++
}
}
}()
return c
}
// Increment counter by pushing an arbitrary int to the write channel.
func (c *CounterChannel) Inc() {
c.check()
c.writeCh <- 1
}
// Get current counter value from the read channel.
func (c *CounterChannel) Get() uint64 {
c.check()
return <-c.readCh
}
func (c *CounterChannel) check() {
if c.readCh == nil {
panic("Uninitialized Counter, requires NewCounterChannel()")
}
}
### Benchmarks

All 3 implementations are fast. Serializing everything through a goroutine with channels costs only a few hundred ns for a single read or write. When constrained to a single OS thread, the cost of goroutines is even lower.
$ go test -bench .
goos: darwin
goarch: amd64
pkg: github.com/jldec/counter-go
cpu: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
#### Simple: 1 op = 1 Inc() in same thread
BenchmarkCounter_1/Atomic-12 195965660 6 ns/op
BenchmarkCounter_1/Mutex-12 54177086 22 ns/op
BenchmarkCounter_1/Channel-12 4499144 286 ns/op
#### Concurrent: 1 op = 1 Inc() across each of 10 goroutines
BenchmarkCounter_2/Atomic_no_reads-12 7298484 191 ns/op
BenchmarkCounter_2/Mutex_no_reads-12 1966656 621 ns/op
BenchmarkCounter_2/Channel_no_reads-12 256842 4771 ns/op
#### Concurrent: 1 op = [ 1 Inc() + 10 Get() ] across each of 10 goroutines
BenchmarkCounter_2/Atomic_10_reads-12 3922029 286 ns/op
BenchmarkCounter_2/Mutex_10_reads-12 416354 2844 ns/op
BenchmarkCounter_2/Channel_10_reads-12 21506 55733 ns/op
#### Constrained to single thread
$ GOMAXPROCS=1 go test -bench .
BenchmarkCounter_1/Atomic 197135869 6 ns/op
BenchmarkCounter_1/Mutex 55698454 22 ns/op
BenchmarkCounter_1/Channel 5689788 214 ns/op
BenchmarkCounter_2/Atomic_no_reads 19519166 60 ns/op
BenchmarkCounter_2/Mutex_no_reads 4702759 254 ns/op
BenchmarkCounter_2/Channel_no_reads 530554 2197 ns/op
BenchmarkCounter_2/Atomic_10_reads 6269979 189 ns/op
BenchmarkCounter_2/Mutex_10_reads 927439 1354 ns/op
BenchmarkCounter_2/Channel_10_reads 47889 25054 ns/op
🚀 - code safe - 🚀
To leave a comment
please visit dev.to/jldec