Working across several GitHub repositories (repos) is inconvenient. Repos are separated from each other, open and closed issues live in different lists, and linked pull requests (PRs) can't be seen without opening an issue. Our team works in more than 70 repos, so I learned that the hard way, and started writing a repo tracker in Python. It's close to final now, so I'd like to share it (it's completely free), along with a few tricks I used along the way.
What it is
The Scraper tracks several GitHub repos in a single Google Sheets (GS) table. You can see all open and closed issues, related PRs, priorities, and your teammates' comments, and you can use coloring, filtering, sorting, and other GS features. It makes for a good integration; here is how it looks:
How it works, briefly
There is a Spreadsheet() class, which contains several Sheet() objects, each of which has its own configuration described in config.py. When the Scraper updates a sheet, it loads the configuration, reads the list of repos to track, requests info from GitHub, builds a table, and sends it to the GS service. Sounds easy, but there were several tough things to deal with, which I solved mostly thanks to my experience working on Google projects, and which I consider good patterns. Take a pen.
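To make that flow concrete, here is a minimal sketch of the same architecture: a Spreadsheet() holding Sheet() objects, each configured in a config dict, building a table of issues from the GitHub REST API. The class internals, field names, and config shape are my assumptions for illustration, not the tracker's actual code, and the upload to Google Sheets is stubbed out.

```python
# A minimal sketch of the architecture described above. Names and the
# config structure are hypothetical; GitHub access uses the public REST API.
import requests

# config.py would hold one entry per sheet, e.g. which repos it tracks.
# This exact shape is an assumption for the sketch.
SHEETS = {
    "Backend": {"repos": ["org/repo-one", "org/repo-two"]},
    "Frontend": {"repos": ["org/web-client"]},
}


class Sheet:
    """One GS worksheet: a list of repos and a way to build its table."""

    def __init__(self, name, config):
        self.name = name
        self.repos = config["repos"]

    def build_table(self, token=None):
        """Request issues from GitHub and flatten them into table rows."""
        headers = {"Accept": "application/vnd.github+json"}
        if token:
            headers["Authorization"] = f"token {token}"
        rows = [["Repo", "Number", "Title", "State"]]
        for repo in self.repos:
            resp = requests.get(
                f"https://api.github.com/repos/{repo}/issues",
                params={"state": "all", "per_page": 100},
                headers=headers,
            )
            resp.raise_for_status()
            for issue in resp.json():
                rows.append([repo, issue["number"], issue["title"], issue["state"]])
        return rows


class Spreadsheet:
    """Container for all tracked sheets; updates them one by one."""

    def __init__(self, configs):
        self.sheets = [Sheet(name, cfg) for name, cfg in configs.items()]

    def update(self, token=None):
        for sheet in self.sheets:
            table = sheet.build_table(token)
            # In the real tracker this step would go through the Google
            # Sheets API (values.update); stubbed out in this sketch.
            print(f"{sheet.name}: {len(table) - 1} rows ready to upload")


if __name__ == "__main__":
    Spreadsheet(SHEETS).update()
```

Note that the real tracker adds the parts this sketch leaves out: per-sheet coloring and sorting rules, PR linkage, and the actual GS upload.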