cal poly slo
train track geometry: https://en.wikipedia.org/wiki/Track_geometry
curvature: https://en.wikipedia.org/wiki/Track_geometry#Curvature
racing line: https://en.wikipedia.org/wiki/Racing_line
racing line physics tutorial: https://www.youtube.com/watch?v=N8qBdOs0s1E
ops s3 bucket data file structure, option 1:

- build/
  - 511.json
  - 512.json
  - 513.json
  - 514.json
  - 515.json
- deploy/
  - preview/
    - 512.json
    - 513.json
    - 514.json
  - production/
    - 511.json
    - 515.json

option 2:

- 511/
  - build.json
  - production-deploy.json
- 512/
  - build.json
  - preview-deploy.json
- 513/
  - build.json
  - preview-deploy.json
- 514/
  - build.json
  - preview-deploy.json
- 515/
  - build.json
  - production-deploy.json
my hunch is that option 1 matches the final ops website structure. there will be some linking between build types, but that is secondary to the core structure, which puts event type first and build type second. this will also let the different builds drop their files into s3 independently of the other build types. (i know, s3 is not actually a filesystem; you don't need to `mkdir -p` before you can put a file under a prefix.)
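one nice property of option 1 is that each travis job only needs to know its own prefix. a minimal sketch of the keys each job would write (bucket name and the default build number here are just illustrative assumptions):

```shell
# sketch: under option 1 each build type writes only to its own prefix
# (bucket name and default build number are assumptions for illustration)
BUCKET="laps.run-ops-data-private"
BUILD="${TRAVIS_BUILD_NUMBER:-515}"

# no mkdir -p equivalent needed; the "directory" exists as soon as the object does
BUILD_KEY="s3://${BUCKET}/build/${BUILD}.json"
PREVIEW_DEPLOY_KEY="s3://${BUCKET}/deploy/preview/${BUILD}.json"

echo "${BUILD_KEY}"
echo "${PREVIEW_DEPLOY_KEY}"
```

the actual upload would then be a single `aws s3 cp` of the payload file to the computed key.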
i still want to get the timing markers into the build and deploy data: an in-place edit of the data file, json or yaml, whichever. yq might be a more convenient choice than jq. since yaml and json are interchangeable inputs for hugo, i should go with whichever tooling i like better. yq can also render json.
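for the in-place edit, a sketch of what this could look like with mikefarah's yq v4 syntax (the field names here are made up, and the script falls back gracefully if yq is absent):

```shell
# sketch: stamp a timing marker into a yaml data file in place,
# then render the same file as json (field names are hypothetical)
cat > /tmp/build.yml <<'EOF'
build_number: 515
EOF

if command -v yq >/dev/null 2>&1; then
  # yq v4: -i edits in place, -o=json renders json
  yq -i ".finished_at = \"$(date -u +%FT%TZ)\"" /tmp/build.yml
  yq -o=json '.' /tmp/build.yml
else
  echo "yq not installed; skipping in-place edit"
fi
cat /tmp/build.yml
```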
yq cli installed in travis build
aws cli installed in travis build
deploy via aws cli to the preview site, in my bin/ scripts, using the PR sha in the preview url. use that url in the PR comment. this gives a distinct url per build, making each preview link comment in a PR thread individually usable. with a single url per PR as currently done, the earlier PR comments are misleading because those deploys are overwritten by the newer ones.
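a sketch of the url construction; `TRAVIS_PULL_REQUEST_SHA` is a real travis env var, but the preview host and path scheme here are assumptions:

```shell
# sketch: build a per-commit preview url from the PR sha
# (preview host and path layout are assumptions, not the real scheme)
SHA="${TRAVIS_PULL_REQUEST_SHA:-abc1234}"
PREVIEW_URL="https://preview.laps.run/${SHA}/"
echo "preview deployed to: ${PREVIEW_URL}"
```

since the sha is baked into the url, every PR comment keeps pointing at the deploy it announced.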
[x] get sha into preview url
[x] assemble build data payload with build timers
[x] push build data in current travis build to ops data bucket
[x] assemble prod deploy data payload with build timers
[x] push production-deploy data in current travis build to ops data bucket
[x] assemble preview deploy data payload with build timers
[x] push preview-deploy data in current travis build to ops data bucket
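a rough shape for one of these payloads, assembled in shell; the field names, the `SECONDS`-based timer, and the defaults are all assumptions, not the real bin/ scripts:

```shell
# sketch: assemble a build data payload with timing markers as yaml
# (field names and defaults are assumptions for illustration)
BUILD_START="$(date -u +%FT%TZ)"
SECONDS=0
# ... the actual hugo build would run here ...
BUILD_DURATION="${SECONDS}"

cat > /tmp/515.yml <<EOF
build_number: ${TRAVIS_BUILD_NUMBER:-515}
commit: ${TRAVIS_COMMIT:-unknown}
started_at: ${BUILD_START}
duration_seconds: ${BUILD_DURATION}
EOF
cat /tmp/515.yml
# then push it to the ops data bucket, e.g.:
#   aws s3 cp /tmp/515.yml s3://laps.run-ops-data-private/build/515.yml
```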
status page appears to be redirecting to stats.uptimerobot.com, but i'm seeing an unrendered template in plaintext. tried some googling but didn't find anything. certificate appears happy in the browser.
hugo new site ops-website
cd !$
aws --profile personal s3 sync s3://laps.run-ops-data-private/ data/
tree data/
data
├── build
│   ├── 0.yml
│   ├── 608.yml
│   ├── 609.yml
│   ├── 610.yml
│   ├── 611.yml
│   └── 612.yml
└── deploy
    ├── preview
    │   ├── 608.yml
    │   ├── 609.yml
    │   ├── 610.yml
    │   ├── 611.yml
    │   └── 612.yml
    └── production
        └── 612.yml
I think we probably want to load the entire contents of the bucket, then generate the content/ subdirectories and pages from those files, most likely putting each file's content into the frontmatter of its page.
actually, let's skip the middleman. we can sync the contents of the bucket straight down to the content/ directory. I should decide whether it matters that preview and production are nested under deploy/, or whether to flatten it.
status page completed: https://status.laps.run. final config:
cloudflare dns: CNAME status -> stats.uptimerobot.com
cloudflare page rules:
http://*laps.run/* - always use https
*status.laps.run/* - full ssl
i disabled squash merges to master going forward. i think it confuses the lineage of the project. especially now that i'm capturing build and preview deploy data for pre-merge commits, it will be helpful to not rewrite pre-merge history, so that links back to this data keep working. with a squash, that link is obscured. i'd rather have catchup commits in branches, even if it makes the lineage untidy, because there is build data associated with those catchup commits that i care about.
this means i should also be careful about force pushing feature branches, if i care about the above.
i think it should be fairly straightforward to move the laps.run terraform out of my main tf repo and open source it.
first test upload of ops site:
cd ops-website
aws --profile personal s3 sync s3://laps.run-ops-data-private/ content/
hugo
aws --profile personal s3 sync public s3://ops.laps.run/
open https://ops.laps.run
move preview build data to new flat prefix:
aws --profile personal s3 cp s3://laps.run-ops-data-private/deploy/preview/ s3://laps.run-ops-data-private/deploy-preview/ --recursive
aws --profile personal s3 rm s3://laps.run-ops-data-private/deploy/preview/ --recursive
same for production:
aws --profile personal s3 cp s3://laps.run-ops-data-private/deploy/production/ s3://laps.run-ops-data-private/deploy-production/ --recursive
aws --profile personal s3 rm s3://laps.run-ops-data-private/deploy/production/ --recursive
next: iterate on ops website transforms and layouts