Executing Functions on Repeating Time Intervals in Go
I recently added a small package to my go-kit that executes a function on repeating time intervals. The interval can be fixed, or it can be dynamic, meaning the delay is determined anew after each execution.
As always, the most up-to-date source code is available on GitHub, but I'd like to go over it briefly here.
Here's what the job package looks like:
// Package job provides the ability to execute timed jobs in their own goroutine.
package job

import "time"

type Job interface {
	// Run is called when the job is triggered.
	Run()

	// SleepTime returns the amount of time to sleep before running
	// the job again.
	SleepTime() time.Duration
}

// RegisterJob schedules a job for execution.
func RegisterJob(j Job) {
	go func(j Job) {
		for {
			j.Run()
			time.Sleep(j.SleepTime())
		}
	}(j)
}
Pretty straightforward: the package exposes a Job interface that can be implemented and passed to RegisterJob to be executed on an interval. RegisterJob starts a new goroutine containing an infinite loop. Within the loop, the Job's Run method is executed, and then the goroutine sleeps for the duration returned by the Job's SleepTime method.
Let's look at a straightforward example:
import "github.com/KyleBanks/go-kit/job" type MyJob struct { } func (MyJob) Run() { fmt.Println("Running...") } func (MyJob) SleepTime() time.Duration { return time.Minute * 2 } job.RegisterJob(&MyJob{})
The MyJob example simply prints "Running..." every two minutes, but it could do anything. For example, send an email newsletter to users, dump metrics to a third-party API, clean caches, etc.
SleepTime can also return a different duration after each execution, giving you the flexibility to modify the job's schedule on the fly. For example, you may want to run your job once every hour during peak hours, and once every two minutes during downtime.
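For instance, a dynamic SleepTime implementation might look something like the following sketch. The DynamicJob type, the 9am-to-5pm peak window, and the exact durations are assumptions made purely for illustration:

package main

import (
	"fmt"
	"time"

	"github.com/KyleBanks/go-kit/job"
)

// DynamicJob is a hypothetical job that adjusts its own schedule.
type DynamicJob struct{}

func (DynamicJob) Run() {
	fmt.Println("Running...")
}

// SleepTime returns a longer delay during the assumed peak hours
// (9am to 5pm local time) and a shorter one the rest of the day.
func (DynamicJob) SleepTime() time.Duration {
	hour := time.Now().Hour()
	if hour >= 9 && hour < 17 {
		return time.Hour
	}
	return time.Minute * 2
}

func main() {
	job.RegisterJob(&DynamicJob{})

	// Block so the program keeps running and the job continues to fire.
	select {}
}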
This package is very simple, and intentionally so, but I would eventually like to extend it to support running in a cluster, where only a specified number of instances in the cluster execute the job. Given the newsletter example above, we may only want a single instance to execute the job to prevent users from being flooded with emails.
Contributions are, as always, welcome via GitHub, and let me know if you end up using the job package for anything interesting!