Comparing changes

Choose two branches to see what’s changed or to start a new pull request.

base repository: autometrics-dev/autometrics-py, base: 0.7
head repository: autometrics-dev/autometrics-py, compare: main

Commits on Jul 24, 2023

  1. Bump dependencies, use caret requirement for prometheus-client (#72)

    * Bump dependencies, use caret requirement for prometheus-client
    
    * Support prometheus-client 16 or 17
    
    * Added changelog
    
    * Bump package version to 0.8
    
    * Mod changelog again
    
    ---------
    
    Co-authored-by: Brett Beutell <brett@fiberplane.com>
    actualwitch and Brett Beutell authored Jul 24, 2023

    SHA: 90e9ed4

Commits on Aug 9, 2023

  1. Rename metrics

    あで committed Aug 9, 2023

    SHA: d16204b
  2. SHA: 7b6e180
  3. Add caller module

    あで committed Aug 9, 2023

    SHA: 1b2335c
  4. SHA: 38379b3
  5. Add service.name

    あで committed Aug 9, 2023

    SHA: acacec4

Commits on Aug 14, 2023

  1. Merge pull request #76 from autometrics-dev/add-service-name

    Add service.name
    actualwitch authored Aug 14, 2023

    SHA: 44ba726

Commits on Aug 16, 2023

  1. Expose metrics and other improvements

    あで committed Aug 16, 2023

    SHA: 8b9c05e
  2. Merge pull request #77 from autometrics-dev/expose-metrics

    Expose metrics and other improvements
    actualwitch authored Aug 16, 2023

    SHA: 5fdb19b

Commits on Aug 21, 2023

  1. Base implementation

    For record_error_if and record_success_if
    flenter committed Aug 21, 2023 · SHA: 38711e6
  2. Update pyright

    flenter committed Aug 21, 2023 · SHA: a17255d
  3. Update typings & initial test

    flenter committed Aug 21, 2023 · SHA: abbe50c
  4. Format code

    flenter committed Aug 21, 2023 · SHA: 4723824
  5. SHA: 6cca5a4
  6. Fix build

    flenter committed Aug 21, 2023 · SHA: 0f643c7
  7. Switch to mypy

    flenter committed Aug 21, 2023 · SHA: a38f15e
  8. SHA: e48b11d
  9. Install examples

    flenter committed Aug 21, 2023 · SHA: d13cbde
  10. Add py.typed to src

    flenter committed Aug 21, 2023 · SHA: fe8205a
  11. use --all-extras

    flenter committed Aug 21, 2023 · SHA: 2a07de8
  12. Use with dev/examples

    flenter committed Aug 21, 2023 · SHA: 685fa1d
  13. Add mypy config to pyproject

    flenter committed Aug 21, 2023 · SHA: f3cddd2
  14. add fix for opentelemetry

    flenter committed Aug 21, 2023 · SHA: b5b82f8

Commits on Aug 23, 2023

  1. Prepare release 0.9

    あで committed Aug 23, 2023 · SHA: 3b1e12e
  2. Test run release workflow

    あで committed Aug 23, 2023 · SHA: 83b92e4
  3. Fix action path

    あで committed Aug 23, 2023 · SHA: 00df201
  4. Force workflow run

    あで committed Aug 23, 2023 · SHA: 5c48bbd
  5. Fix release action

    あで committed Aug 23, 2023 · SHA: 962f248
  6. Generate correct changelog

    あで committed Aug 23, 2023 · SHA: 7dcf5dc
  7. Fix version action

    あで committed Aug 23, 2023 · SHA: 2613839
  8. Fix version action?

    あで committed Aug 23, 2023 · SHA: 9c1a58e
  9. Switch to output context

    あで committed Aug 23, 2023 · SHA: 9c1a463
  10. Fix env name

    あで committed Aug 23, 2023 · SHA: 0d22bd7
  11. Test workflow

    あで committed Aug 23, 2023 · SHA: fc45c77
  12. Cleanup

    あで committed Aug 23, 2023 · SHA: ae174d0
  13. Mini fix

    あで committed Aug 23, 2023 · SHA: 28f0912

Commits on Aug 24, 2023

  1. Some fixes

    あで committed Aug 24, 2023 · SHA: 1ab501e
  2. SHA: c971e10
  3. Fix automation

    あで committed Aug 24, 2023 · SHA: d199b8a
  4. Merge pull request #81 from autometrics-dev/release/0.9

    Fix automation
    actualwitch authored Aug 24, 2023 · SHA: 4157e97
  5. Switch to using workflow dispatch

    あで committed Aug 24, 2023 · SHA: ce966de
  6. Merge pull request #82 from autometrics-dev/workflow-dispatch

    Switch to using workflow dispatch
    actualwitch authored Aug 24, 2023 · SHA: 4c29e5b
  7. Move twine to dev deps

    あで committed Aug 24, 2023 · SHA: fa755e3
  8. Merge pull request #83 from autometrics-dev/move-twine-to-dev-group

    Move twine to dev deps
    actualwitch authored Aug 24, 2023 · SHA: 0a33f8b

Commits on Aug 28, 2023

  1. Fix build with pypy (and python 3.10)

    This is done by using the "follow_imports=skip" option for opentelemetry.attributes
    flenter committed Aug 28, 2023 · SHA: c4e8d33
  2. SHA: 407ebac
  3. Update changelog

    flenter committed Aug 28, 2023 · SHA: 7d7ad1c
  4. SHA: b732db5
  5. Feedback on pr

    flenter committed Aug 28, 2023 · SHA: 575f13d
  6. Merge pull request #79 from autometrics-dev/resolve_45_add_ok_error_logic

    Add ok/error logic to the decorator
    flenter authored Aug 28, 2023 · SHA: 87eb542
Showing with 5,242 additions and 1,232 deletions.
  1. +4 −0 .github/actions/get-release-version/action.yml
  2. +16 −0 .github/actions/get-release-version/get-release-version.js
  3. +14 −0 .github/pull_request_template.md
  4. +15 −7 .github/workflows/main.yml
  5. +41 −0 .github/workflows/release.yml
  6. +54 −0 CHANGELOG.md
  7. +0 −23 LICENSE
  8. +176 −0 LICENSE-APACHE
  9. +21 −0 LICENSE-MIT
  10. +227 −52 README.md
  11. +10 −0 Tiltfile
  12. +44 −0 configs/compose/examples.yaml
  13. +48 −0 configs/compose/infra.yaml
  14. +17 −0 configs/docker/base.Dockerfile
  15. +4 −0 configs/grafana/config.ini
  16. +392 −0 configs/grafana/dashboards/Autometrics Function Explorer.json
  17. +356 −0 configs/grafana/dashboards/Autometrics Overview.json
  18. +712 −0 configs/grafana/dashboards/Autometrics Service Level Objectives (SLOs).json
  19. +24 −0 configs/grafana/provisioning/dashboards/dashboards.yml
  20. +12 −0 configs/grafana/provisioning/datasources/datasource.yml
  21. +23 −0 configs/otel-collector-config.yaml
  22. +3 −0 docker-compose.yaml
  23. +1 −3 examples/README.md
  24. +4 −4 examples/caller-example.py
  25. +8 −0 examples/django_example/django_example/settings.py
  26. +6 −0 examples/django_example/mypy.ini
  27. +4 −9 examples/django_example/run_example.sh
  28. +3 −1 examples/docs-example.py
  29. +4 −4 examples/example.py
  30. +27 −0 examples/export_metrics/otel-prometheus.py
  31. +38 −0 examples/export_metrics/otlp-grpc.py
  32. +37 −0 examples/export_metrics/otlp-http.py
  33. +27 −0 examples/export_metrics/prometheus-client.py
  34. +49 −3 examples/fastapi-example.py
  35. +5 −3 examples/fastapi-with-fly-io/README.md
  36. +6 −4 examples/fastapi-with-fly-io/app.py
  37. +17 −11 examples/starlette-otel-exemplars.py
  38. +1,239 −795 poetry.lock
  39. +62 −19 pyproject.toml
  40. +1 −0 src/autometrics/__init__.py
  41. +27 −0 src/autometrics/conftest.py
  42. +11 −1 src/autometrics/constants.py
  43. +150 −59 src/autometrics/decorator.py
  44. 0 src/autometrics/{tracker → }/exemplar.py
  45. +159 −0 src/autometrics/exposition.py
  46. +38 −0 src/autometrics/initialization.py
  47. +9 −0 src/autometrics/objectives.py
  48. 0 src/autometrics/py.typed
  49. +125 −0 src/autometrics/settings.py
  50. +3 −1 src/autometrics/test_caller.py
  51. +176 −38 src/autometrics/test_decorator.py
  52. +190 −0 src/autometrics/test_initialization.py
  53. +16 −0 src/autometrics/test_objectives.py
  54. +55 −0 src/autometrics/test_utils.py
  55. +1 −0 src/autometrics/tracker/__init__.py
  56. +86 −28 src/autometrics/tracker/opentelemetry.py
  57. +93 −21 src/autometrics/tracker/prometheus.py
  58. +82 −0 src/autometrics/tracker/temporary.py
  59. +22 −36 src/autometrics/tracker/test_concurrency.py
  60. +25 −0 src/autometrics/tracker/test_format.py
  61. +37 −30 src/autometrics/tracker/test_tracker.py
  62. +28 −79 src/autometrics/tracker/tracker.py
  63. +69 −0 src/autometrics/tracker/types.py
  64. +89 −1 src/autometrics/utils.py
  65. 0 src/py.typed
4 changes: 4 additions & 0 deletions .github/actions/get-release-version/action.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,4 @@
name: Get version
runs:
  using: "node16"
  main: "get-release-version.js"
16 changes: 16 additions & 0 deletions .github/actions/get-release-version/get-release-version.js
@@ -0,0 +1,16 @@
const fs = require("fs");
const path = require("path");
const regex = /version = "([\d.]+)"/gm;

const file_path = path.join(process.env.GITHUB_WORKSPACE, "pyproject.toml");
const file_contents = fs.readFileSync(file_path, { encoding: "utf8" });
const matches = regex.exec(file_contents);
if (matches && matches.length == 2) {
  const [_, version] = matches;

  fs.appendFileSync(process.env.GITHUB_OUTPUT, `version=${version}`, {
    encoding: "utf8",
  });
} else {
  throw new Error(`No version found in ${file_path}`);
}
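The action above reads the release version straight out of `pyproject.toml` with a regex. The same extraction can be sketched in Python for checking it locally (a hypothetical helper that mirrors the action's regex; not part of the repo):

```python
import re

# Mirrors the pattern used by get-release-version.js: version = "X.Y.Z"
VERSION_RE = re.compile(r'version = "([\d.]+)"')

def extract_version(pyproject_text: str) -> str:
    """Return the version declared in a pyproject.toml, or raise if absent."""
    match = VERSION_RE.search(pyproject_text)
    if match is None:
        raise ValueError("No version found in pyproject.toml")
    return match.group(1)

sample = '[tool.poetry]\nname = "autometrics"\nversion = "0.9"\n'
print(extract_version(sample))  # → 0.9
```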
14 changes: 14 additions & 0 deletions .github/pull_request_template.md
@@ -0,0 +1,14 @@
## Description

> Please give context of this change for reviewers. **What did you change, and why?**
## Checklist

- [ ] Describe what you're doing, to help give context for reviewer(s)
- [ ] Link to any helpful documentation (Github issues, linear, Slack discussions, etc)
- [ ] Create test cases
- [ ] Update changelog
<!-- Use these for release PRs:
- [ ] Update package version in `pyproject.toml`
- [ ] Update spec version in `constants.py`
- [ ] Move changes to a new section in `CHANGELOG.md` -->
22 changes: 15 additions & 7 deletions .github/workflows/main.yml
@@ -12,20 +12,28 @@ jobs:
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        python-version: ["3.7", "3.11", "pypy3.10"]
+        python-version: ["3.8", "3.12", "pypy3.10"]
     env:
       FORCE_COLOR: 1
     steps:
       - uses: actions/checkout@v3
       - name: Install poetry
         run: pipx install poetry
       - uses: actions/setup-python@v4
         with:
           python-version: ${{ matrix.python-version }}
-          cache: "poetry"
-      - name: Install dependencies
-        run: poetry install --no-interaction --no-root --with dev
+          cache: poetry
+      - name: Install dependencies (cpython)
+        if: ${{ matrix.python-version != 'pypy3.10' }}
+        run: poetry install --no-interaction --no-root --with dev,examples --all-extras
+      - name: Install dependencies (pypy)
+        if: ${{ matrix.python-version == 'pypy3.10' }}
+        run: poetry install --no-interaction --no-root --with dev,examples --extras=exporter-otlp-proto-http
       - name: Check code formatting
         run: poetry run black .
-      - name: Lint code
-        run: poetry run pyright
+      - name: Lint lib code
+        run: poetry run mypy src --enable-incomplete-feature=Unpack
+      - name: Lint lib examples
+        run: poetry run mypy examples --enable-incomplete-feature=Unpack
       - name: Run tests
-        run: poetry run pytest
+        run: poetry run pytest -n auto
41 changes: 41 additions & 0 deletions .github/workflows/release.yml
@@ -0,0 +1,41 @@
name: Release and publish
on: [workflow_dispatch]

permissions:
  contents: write

jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: ./.github/actions/get-release-version
        id: release_version
      - name: Install poetry
        run: pipx install poetry
      - uses: actions/setup-python@v4
        with:
          python-version: 3.11
          cache: poetry
      - name: Install dependencies
        run: poetry install --no-interaction --no-root --with dev
      - name: Build
        run: poetry build
      - name: Tag release
        run: |
          git config --local user.email "github-actions[bot]@users.noreply.github.com"
          git config --local user.name "github-actions[bot]"
          git tag ${{ steps.release_version.outputs.version }}
          git push origin ${{ steps.release_version.outputs.version }}
      - name: Create release
        uses: softprops/action-gh-release@v1
        with:
          files: dist/*
          tag_name: ${{ steps.release_version.outputs.version }}
          generate_release_notes: true
          name: ${{ steps.release_version.outputs.version }}
      - name: Publish
        run: poetry run twine upload dist/*
        env:
          TWINE_USERNAME: __token__
          TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
54 changes: 54 additions & 0 deletions CHANGELOG.md
@@ -34,6 +34,60 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).

-

## [1.0.0](https://github.com/autometrics-dev/autometrics-py/releases/tag/1.0.0) - 2023-11-14

### Added

- Added support for `record_error_if` and `record_success_if`
- Added OTLP exporters for OpenTelemetry tracker (#89)
- Added `repository_url` and `repository_provider` labels to `build_info` (#97)
- Added `autometrics.version` label to `build_info` (#101)
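The `record_error_if` / `record_success_if` hooks from the first bullet let the decorator classify outcomes that don't map cleanly onto raise-vs-return (for example, a handler that reports failure via a status code). A self-contained toy sketch of the assumed semantics — not the library's implementation; with the real package these are presumably keyword arguments to the `@autometrics(...)` decorator:

```python
from functools import wraps

OUTCOMES = []  # (function name, "ok" | "error") — stand-in for a metrics counter

def instrumented(record_error_if=None, record_success_if=None):
    """Toy decorator sketching the assumed classification: record_error_if
    inspects return values, record_success_if inspects raised exceptions."""
    def decorate(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                result = fn(*args, **kwargs)
            except Exception as exc:
                ok = bool(record_success_if and record_success_if(exc))
                OUTCOMES.append((fn.__name__, "ok" if ok else "error"))
                raise
            error = bool(record_error_if and record_error_if(result))
            OUTCOMES.append((fn.__name__, "error" if error else "ok"))
            return result
        return wrapper
    return decorate

@instrumented(record_error_if=lambda status: status >= 500)
def handle_request(status: int) -> int:
    # Signals failure via its return value, not an exception.
    return status

handle_request(200)
handle_request(503)
print(OUTCOMES)  # [('handle_request', 'ok'), ('handle_request', 'error')]
```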

### Changed

- [💥 Breaking change] `init` function is now required to be called before using autometrics (#89)
- Prometheus exporters are now configured via `init` function (#89)
- Updated examples to call `init` function (#94)
- Updated `docker compose` / `tilt` config in repo to include grafana with our dashboards (#94)
- `Objective`s will now emit a warning when name contains characters other than alphanumeric and dash (#99)

### Security

- Updated FastAPI and Pydantic dependencies in the examples group (#89)
- Updated dependencies in dev and examples groups (#97)

## [0.9](https://github.com/autometrics-dev/autometrics-py/releases/tag/0.9) - 2023-09-24

### Added

- Added the `start_http_server`, which starts a separate HTTP server to expose
the metrics instead of using a separate endpoint in the existing server. (#77)
- Added the `init` function that you can use to configure autometrics. (#77)
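The idea behind `start_http_server` — serving metrics from a dedicated port rather than an endpoint bolted onto the application's own server — can be sketched with only the standard library. This is an illustrative stand-in, not the library's implementation; the real server renders the Prometheus registry and the payload below is made up:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Illustrative payload; the real server would render the Prometheus registry.
METRICS = b"function_calls_total 42\n"

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(METRICS)

    def log_message(self, *args):
        pass  # keep request logging quiet

def start_http_server() -> HTTPServer:
    # Bind port 0 so the OS picks a free port; the real API takes a fixed port.
    server = HTTPServer(("127.0.0.1", 0), MetricsHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

server = start_http_server()
port = server.server_address[1]
body = urlopen(f"http://127.0.0.1:{port}/metrics").read()
server.shutdown()
print(body)  # b'function_calls_total 42\n'
```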

### Changed

- Renamed the `function.calls.count` metric to `function.calls` (which is exported
to Prometheus as `function_calls_total`) to be in line with OpenTelemetry and
OpenMetrics naming conventions. **Dashboards and alerting rules must be updated.** (#74)
- When the `function.calls.duration` histogram is exported to Prometheus, it now
includes the units (`function_calls_duration_seconds`) to be in line with
Prometheus/OpenMetrics naming conventions. **Dashboards and alerting rules must be updated.** (#74)
- The `caller` label on the `function.calls` metric was replaced with `caller.function`
and `caller.module` (#75)
- All metrics now have a `service.name` label attached. This is set via runtime environment
variable (`AUTOMETRICS_SERVICE_NAME` or `OTEL_SERVICE_NAME`), or falls back to the package name. (#76)
- In case of running a script outside of module, the `module` label is now set to file name (#80)
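The `service.name` fallback described above can be sketched as follows (the precedence of `AUTOMETRICS_SERVICE_NAME` over `OTEL_SERVICE_NAME` is an assumption inferred from the bullet's ordering):

```python
import os

def resolve_service_name(package_name: str, env=os.environ) -> str:
    """Assumed fallback chain: explicit AUTOMETRICS_SERVICE_NAME wins,
    then OTEL_SERVICE_NAME, then the package name."""
    return (
        env.get("AUTOMETRICS_SERVICE_NAME")
        or env.get("OTEL_SERVICE_NAME")
        or package_name
    )

print(resolve_service_name("my-app", env={}))                                # my-app
print(resolve_service_name("my-app", env={"OTEL_SERVICE_NAME": "billing"}))  # billing
```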

### Security

- Updated dependencies in examples group (#77)

## [0.8](https://github.com/autometrics-dev/autometrics-py/releases/tag/0.8) - 2023-07-24

### Added

- Support for prometheus-client 0.17.x

## [0.7](https://github.com/autometrics-dev/autometrics-py/releases/tag/0.7) - 2023-07-19

### Added
23 changes: 0 additions & 23 deletions LICENSE

This file was deleted.

176 changes: 176 additions & 0 deletions LICENSE-APACHE
@@ -0,0 +1,176 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.

"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS
21 changes: 21 additions & 0 deletions LICENSE-MIT
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2023 Fiberplane B.V.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
279 changes: 227 additions & 52 deletions README.md

Large diffs are not rendered by default.

10 changes: 10 additions & 0 deletions Tiltfile
@@ -0,0 +1,10 @@
docker_compose(['configs/compose/infra.yaml', 'configs/compose/examples.yaml'])

dc_resource('am', labels=["infra"])
dc_resource('grafana', labels=["infra"])
dc_resource('otel-collector', labels=["infra"])
dc_resource('push-gateway', labels=["infra"])
dc_resource('django', labels=["examples"])
dc_resource('fastapi', labels=["examples"])
dc_resource('otlp', labels=["examples"])
dc_resource('starlette', labels=["examples"])
44 changes: 44 additions & 0 deletions configs/compose/examples.yaml
@@ -0,0 +1,44 @@
version: "3.8"

services:
  django:
    container_name: django
    build:
      context: ../..
      dockerfile: configs/docker/base.Dockerfile
      args:
        PORT: 9464
        COPY_PATH: examples/django_example
        COMMAND: ./run_example.sh
    ports:
      - "9464:9464"
  fastapi:
    container_name: fastapi
    build:
      context: ../..
      dockerfile: configs/docker/base.Dockerfile
      args:
        PORT: 8080
        COPY_PATH: examples/fastapi-example.py
        COMMAND: poetry run python3 fastapi-example.py
    ports:
      - "9465:8080"
  starlette:
    container_name: starlette
    build:
      context: ../..
      dockerfile: configs/docker/base.Dockerfile
      args:
        PORT: 8080
        COPY_PATH: examples/starlette-otel-exemplars.py
        COMMAND: poetry run python3 starlette-otel-exemplars.py
    ports:
      - "9466:8080"
  otlp:
    container_name: otlp
    build:
      context: ../..
      dockerfile: configs/docker/base.Dockerfile
      args:
        COPY_PATH: examples/export_metrics/otlp-http.py
        COMMAND: poetry run python3 otlp-http.py
48 changes: 48 additions & 0 deletions configs/compose/infra.yaml
@@ -0,0 +1,48 @@
version: "3.8"

volumes:
  app-logs: {}
  grafana-storage: {}

services:
  am:
    container_name: am
    image: autometrics/am:latest
    extra_hosts:
      - host.docker.internal:host-gateway
    ports:
      - "6789:6789"
      - "9090:9090"
    command: "start http://otel-collector:9464/metrics host.docker.internal:9464 host.docker.internal:9465 host.docker.internal:9466"
    environment:
      - LISTEN_ADDRESS=0.0.0.0:6789
    restart: unless-stopped
    volumes:
      - app-logs:/var/log
  otel-collector:
    container_name: otel-collector
    image: otel/opentelemetry-collector-contrib:latest
    command: ["--config=/etc/otel-collector-config.yaml"]
    volumes:
      - ../otel-collector-config.yaml:/etc/otel-collector-config.yaml
    ports:
      - "4317:4317"
      - "4318:4318"
      - "8888:8888" # expose container metrics in prometheus format
      - "55680:55680"
      - "55679:55679"
    restart: unless-stopped
  push-gateway:
    container_name: push-gateway
    image: ghcr.io/zapier/prom-aggregation-gateway:latest
  grafana:
    container_name: grafana
    image: grafana/grafana-oss
    restart: unless-stopped
    ports:
      - "3000:3000"
    volumes:
      - grafana-storage:/var/lib/grafana
      - ../grafana/config.ini:/etc/grafana/grafana.ini
      - ../grafana/dashboards:/var/lib/grafana/dashboards
      - ../grafana/provisioning:/etc/grafana/provisioning
17 changes: 17 additions & 0 deletions configs/docker/base.Dockerfile
@@ -0,0 +1,17 @@

FROM python:latest
ARG COPY_PATH
ARG COMMAND
ARG PORT
WORKDIR /app
RUN apt-get update
RUN pip install poetry
COPY pyproject.toml poetry.lock src ./
RUN poetry config virtualenvs.create false
RUN poetry install --no-interaction --no-root --with examples --extras "exporter-otlp-proto-http"
COPY $COPY_PATH ./
ENV OTEL_EXPORTER_OTLP_ENDPOINT http://host.docker.internal:4318
ENV COMMAND $COMMAND
ENV PORT $PORT
EXPOSE $PORT
CMD ["sh", "-c", "$COMMAND"]
4 changes: 4 additions & 0 deletions configs/grafana/config.ini
@@ -0,0 +1,4 @@
[auth.anonymous]
disable_login_form = true
enabled = true
org_role = Admin
392 changes: 392 additions & 0 deletions configs/grafana/dashboards/Autometrics Function Explorer.json
@@ -0,0 +1,392 @@
{
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": {
"type": "datasource",
"uid": "grafana"
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"target": {
"limit": 100,
"matchAny": false,
"tags": [],
"type": "dashboard"
},
"type": "dashboard"
}
]
},
"description": "",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 0,
"id": 19,
"links": [],
"liveNow": false,
"panels": [
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"description": "",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "Calls per Second",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "none"
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 0
},
"id": 4,
"options": {
"legend": {
"calcs": [
"lastNotNull",
"max"
],
"displayMode": "table",
"placement": "right",
"showLegend": true,
"sortBy": "Max",
"sortDesc": true
},
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"pluginVersion": "9.4.1",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"editorMode": "code",
"expr": "sum by (function, module, service_name, version, commit) (\n rate(\n {\n __name__=~\"function_calls(_count)?(_total)?\",\n function=~\"${function}\"\n }[$__rate_interval]\n )\n * on(instance, job) group_left(version, commit) (last_over_time(build_info[$__rate_interval]) or on (instance, job) up)\n)",
"format": "time_series",
"instant": false,
"interval": "",
"legendFormat": "",
"refId": "A"
}
],
"title": "Request Rate",
"transformations": [],
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"description": "",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic",
"seriesBy": "max"
},
"custom": {
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "% of Function Calls That Errored",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "percentunit"
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 8
},
"id": 2,
"options": {
"legend": {
"calcs": [
"lastNotNull",
"max"
],
"displayMode": "table",
"placement": "right",
"showLegend": true,
"sortBy": "Max",
"sortDesc": true
},
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"pluginVersion": "9.4.1",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"editorMode": "code",
"expr": "(\n sum by(function, module, service_name, version, commit) (\n rate(\n {\n __name__=~\"function_calls(_count)?(_total)?\",\n result=\"error\", \n function=~\"${function}\"\n }[$__rate_interval]\n )\n * on(instance, job) group_left(version, commit) (last_over_time(build_info[$__rate_interval]) or on (instance, job) up)\n )) / (\n sum by(function, module, service_name, version, commit) (\n rate(\n {\n __name__=~\"function_calls(_count)?(_total)?\",\n function=~\"${function}\"\n }[$__rate_interval]\n )\n * on(instance, job) group_left(version, commit) (last_over_time(build_info[$__rate_interval]) or on (instance, job) up)\n ))",
"interval": "",
"legendFormat": "",
"range": true,
"refId": "A"
}
],
"title": "Error Ratio",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"description": "This shows the 99th and 95th percentile latency or response time for the given function.\n\nFor example, if the 99th percentile latency is 500 milliseconds, that means that 99% of calls to the function are handled within 500ms or less.",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic",
"seriesBy": "max"
},
"custom": {
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "Function Call Duration",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "s"
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 16
},
"id": 5,
"options": {
"legend": {
"calcs": [
"lastNotNull",
"max"
],
"displayMode": "table",
"placement": "right",
"showLegend": true,
"sortBy": "Max",
"sortDesc": true
},
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"pluginVersion": "9.4.1",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"editorMode": "code",
"expr": "label_replace(\n histogram_quantile(0.99, \n sum by (le, function, module, service_name, commit, version) (\n rate({__name__=~\"function_calls_duration(_seconds)?_bucket\", function=~\"$function\"}[$__rate_interval])\n # Attach the `version` and `commit` labels from the `build_info` metric \n * on(instance, job) group_left(version, commit) (last_over_time(build_info[$__rate_interval]) or on (instance, job) up)\n )\n ),\n # Add the label {percentile_latency=\"99\"} to the time series\n \"percentile_latency\", \"99\", \"\", \"\"\n)\nor\nlabel_replace(\n histogram_quantile(0.95, \n sum by (le, function, module, service_name, commit, version) (\n rate({__name__=~\"function_calls_duration(_seconds)?_bucket\", function=~\"$function\"}[$__rate_interval])\n # Attach the `version` and `commit` labels from the `build_info` metric \n * on(instance, job) group_left(version, commit) (last_over_time(build_info[$__rate_interval]) or on (instance, job) up)\n )\n ),\n # Add the label {percentile_latency=\"95\"} to the time series\n \"percentile_latency\", \"95\", \"\", \"\"\n)",
"interval": "",
"legendFormat": "",
"range": true,
"refId": "A"
}
],
"title": "Latency (95th and 99th Percentile)",
"type": "timeseries"
}
],
"refresh": "5m",
"revision": 1,
"schemaVersion": 38,
"style": "dark",
"tags": [
"autometrics"
],
"templating": {
"list": [
{
"allValue": "__none__",
"current": {},
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"definition": "label_values({__name__=~\"function_calls(_count)?(_total)?\"}, function)",
"hide": 0,
"includeAll": false,
"label": "Show Function(s)",
"multi": true,
"name": "function",
"options": [],
"query": {
"query": "label_values({__name__=~\"function_calls(_count)?(_total)?\"}, function)",
"refId": "StandardVariableQuery"
},
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 1,
"tagValuesQuery": "",
"tagsQuery": "",
"type": "query",
"useTags": false
}
]
},
"time": {
"from": "now-6h",
"to": "now"
},
"timepicker": {},
"timezone": "",
"title": "Autometrics Function Explorer",
"uid": "autometrics-function-explorer",
"version": 1,
"weekStart": ""
}
356 changes: 356 additions & 0 deletions configs/grafana/dashboards/Autometrics Overview.json
@@ -0,0 +1,356 @@
{
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": {
"type": "datasource",
"uid": "grafana"
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"target": {
"limit": 100,
"matchAny": false,
"tags": [],
"type": "dashboard"
},
"type": "dashboard"
}
]
},
"description": "",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 0,
"id": 20,
"links": [],
"liveNow": false,
"panels": [
{
"collapsed": false,
"datasource": {
"type": "prometheus",
"uid": "Sc9Taxa4z"
},
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 0
},
"id": 8,
"panels": [],
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "Sc9Taxa4z"
},
"refId": "A"
}
],
"title": "Autometrics-Instrumented Functions",
"type": "row"
},
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"description": "Calls per second is calculated as the average over a 5-minute window",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "Calls per Second",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "none"
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 1
},
"id": 4,
"options": {
"legend": {
"calcs": [
"lastNotNull",
"max"
],
"displayMode": "table",
"placement": "right",
"showLegend": true,
"sortBy": "Max",
"sortDesc": true
},
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"pluginVersion": "9.4.1",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"editorMode": "code",
"expr": "sum by (function, module, service_name, version, commit) (\n rate(\n {\n __name__=~\"function_calls(_count)?(_total)?\", \n function=~\"${functions_top_request_rate}\"\n }[5m]\n )\n * on(instance, job) group_left(version, commit) (last_over_time(build_info[$__rate_interval]) or on (instance, job) up)\n)",
"format": "time_series",
"instant": false,
"interval": "",
"legendFormat": "",
"refId": "A"
}
],
"title": "Request Rate (Top $num_function_limit)",
"transformations": [],
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"description": "",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic",
"seriesBy": "max"
},
"custom": {
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "Error Rate",
"axisPlacement": "auto",
"axisSoftMax": 1,
"axisSoftMin": 0,
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "percentunit"
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 9
},
"id": 2,
"options": {
"legend": {
"calcs": [
"lastNotNull",
"max"
],
"displayMode": "table",
"placement": "right",
"showLegend": true,
"sortBy": "Max",
"sortDesc": true
},
"tooltip": {
"mode": "multi",
"sort": "desc"
}
},
"pluginVersion": "9.4.1",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"editorMode": "code",
"expr": "(\n sum by(function, module, service_name, version, commit) (\n rate(\n {\n __name__=~\"function_calls(_count)?(_total)?\", \n result=\"error\", \n function=~\"${functions_top_error_rate}\"\n }[5m]\n )\n * on(instance, job) group_left(version, commit) (last_over_time(build_info[$__rate_interval]) or on (instance, job) up)\n )) / (\n sum by(function, module, service_name, version, commit) (\n rate(\n {\n __name__=~\"function_calls(_count)?(_total)?\", \n function=~\"${functions_top_error_rate}\"\n }[5m]\n )\n * on(instance, job) group_left(version, commit) (last_over_time(build_info[$__rate_interval]) or on (instance, job) up)\n ))",
"interval": "",
"legendFormat": "",
"range": true,
"refId": "A"
}
],
"title": "Error Rate (Top $num_function_limit)",
"type": "timeseries"
}
],
"refresh": "5m",
"revision": 1,
"schemaVersion": 38,
"style": "dark",
"tags": [
"autometrics"
],
"templating": {
"list": [
{
"allValue": "",
"current": {},
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"definition": "query_result(topk($num_function_limit, sum by (function, module, service_name) (rate({__name__=~\"function_calls(_count)?(_total)?\"}[$__range]))))\n",
"hide": 2,
"includeAll": true,
"multi": true,
"name": "functions_top_request_rate",
"options": [],
"query": {
"query": "query_result(topk($num_function_limit, sum by (function, module, service_name) (rate({__name__=~\"function_calls(_count)?(_total)?\"}[$__range]))))\n",
"refId": "StandardVariableQuery"
},
"refresh": 2,
"regex": "/function=\"(\\w+)\"/",
"skipUrlSync": false,
"sort": 4,
"type": "query"
},
{
"current": {
"selected": false,
"text": "10",
"value": "10"
},
"hide": 0,
"label": "Top Functions to Display",
"name": "num_function_limit",
"options": [
{
"selected": true,
"text": "10",
"value": "10"
}
],
"query": "10",
"skipUrlSync": false,
"type": "textbox"
},
{
"allValue": "",
"current": {},
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"definition": "query_result(topk($num_function_limit, sum by (function, module, service_name) (rate({__name__=~\"function_calls(_count)?(_total)?\", result=\"error\"}[$__range])) / (sum by (function, module, service_name) (rate({__name__=~\"function_calls(_count)?(_total)?\"}[$__range])))))\n",
"hide": 2,
"includeAll": true,
"multi": true,
"name": "functions_top_error_rate",
"options": [],
"query": {
"query": "query_result(topk($num_function_limit, sum by (function, module, service_name) (rate({__name__=~\"function_calls(_count)?(_total)?\", result=\"error\"}[$__range])) / (sum by (function, module, service_name) (rate({__name__=~\"function_calls(_count)?(_total)?\"}[$__range])))))\n",
"refId": "StandardVariableQuery"
},
"refresh": 2,
"regex": "/function=\"(\\w+)\"/",
"skipUrlSync": false,
"sort": 4,
"type": "query"
}
]
},
"time": {
"from": "now-6h",
"to": "now"
},
"timepicker": {},
"timezone": "",
"title": "Autometrics Overview",
"uid": "autometrics-overview",
"version": 1,
"weekStart": ""
}

Large diffs are not rendered by default.

24 changes: 24 additions & 0 deletions configs/grafana/provisioning/dashboards/dashboards.yml
@@ -0,0 +1,24 @@
apiVersion: 1

providers:
# <string> a unique provider name. Required
- name: ${DS_PROMETHEUS}
# <int> Org id. Default to 1
orgId: 1
# <string> name of the dashboard folder.
folder: 'Autometrics'
# <string> folder UID. will be automatically generated if not specified
folderUid: ''
# <string> provider type. Default to 'file'
type: file
# <bool> disable dashboard deletion
disableDeletion: false
# <int> how often Grafana will scan for changed dashboards
updateIntervalSeconds: 10
# <bool> allow updating provisioned dashboards from the UI
allowUiUpdates: false
options:
# <string, required> path to dashboard files on disk. Required when using the 'file' type
path: /var/lib/grafana/dashboards
# <bool> use folder names from filesystem to create folders in Grafana
foldersFromFilesStructure: true
12 changes: 12 additions & 0 deletions configs/grafana/provisioning/datasources/datasource.yml
@@ -0,0 +1,12 @@
apiVersion: 1

datasources:
- name: Prometheus
type: prometheus
access: proxy
orgId: 1
# Use the name and container port from docker compose
url: http://am:9090/prometheus
basicAuth: false
isDefault: true
editable: true
23 changes: 23 additions & 0 deletions configs/otel-collector-config.yaml
@@ -0,0 +1,23 @@
receivers:
otlp:
protocols:
grpc:
http:

exporters:
logging:
loglevel: debug
prometheus:
endpoint: "0.0.0.0:9464" # This is where Prometheus will scrape the metrics from.
# namespace: <namespace> # Replace with your namespace.


processors:
batch:

service:
pipelines:
metrics:
receivers: [otlp]
processors: []
exporters: [logging, prometheus]
3 changes: 3 additions & 0 deletions docker-compose.yaml
@@ -0,0 +1,3 @@
include:
- configs/compose/infra.yaml
- configs/compose/examples.yaml
4 changes: 1 addition & 3 deletions examples/README.md
@@ -69,8 +69,6 @@ This is a default Django project with autometrics configured. You can find examp

## `starlette-otel-exemplars.py`

This app shows how to use the OpenTelemetry integration to add exemplars to your metrics. In a distributed system, it allows you to track a request as it flows through your system by adding trace/span ids to it. We can catch these ids from OpenTelemetry and expose them to Prometheus as exemplars. Do note that exemplars are an experimental feature and you need to enable it in Prometheus with a `--enable-feature=exemplar-storage` flag. Run the example with a command:

`AUTOMETRICS_TRACKER=prometheus AUTOMETRICS_EXEMPLARS=true uvicorn starlette-otel-exemplars:app --port 8080`
This app shows how to use the OpenTelemetry integration to add exemplars to your metrics. In a distributed system, exemplars let you track a request as it flows through your system by attaching trace/span ids to it. We can capture these ids from OpenTelemetry and expose them to Prometheus as exemplars. Do note that exemplars are an experimental feature, and you need to enable them in Prometheus with the `--enable-feature=exemplar-storage` flag.

> Don't forget to configure Prometheus itself to scrape the metrics endpoint. Refer to the example `prometheus.yaml` file in the root of this project on how to set this up.
8 changes: 4 additions & 4 deletions examples/caller-example.py
@@ -1,5 +1,4 @@
from prometheus_client import start_http_server
from autometrics import autometrics
from autometrics import autometrics, init
import time
import random

@@ -34,8 +33,9 @@ def destiny():
return f"Destiny is calling simba. simba says: {simba()}"


# Start an HTTP server on port 8080 using the Prometheus client library, which exposes our metrics to prometheus
start_http_server(8080)
# Initialize autometrics and start an HTTP server on port 8080 using
# the Prometheus client library, which exposes our metrics to prometheus
init(exporter={"type": "prometheus", "port": 8080})

print(f"Try this PromQL query in your Prometheus dashboard:\n")
print(
8 changes: 8 additions & 0 deletions examples/django_example/django_example/settings.py
@@ -11,6 +11,14 @@
"""

from pathlib import Path
from autometrics import init

init(
branch="main",
commit="67a1b3a",
version="0.1.0",
service_name="django",
)

# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
6 changes: 6 additions & 0 deletions examples/django_example/mypy.ini
@@ -0,0 +1,6 @@
[mypy]
plugins =
mypy_django_plugin.main

[mypy.plugins.django-stubs]
django_settings_module = "django_example.settings"
13 changes: 4 additions & 9 deletions examples/django_example/run_example.sh
@@ -1,15 +1,10 @@
#!/bin/sh

export AUTOMETRICS_COMMIT=67a1b3a
export AUTOMETRICS_VERSION=0.1.0
export AUTOMETRICS_BRANCH=main
export AUTOMETRICS_TRACKER=prometheus

# run the server itself
poetry run python manage.py runserver 8080 &
# run the locust load test and pipe stdout to dev/null
poetry run locust --host=http://localhost:8080 --users=100 --headless &
poetry run python manage.py runserver 0.0.0.0:9464 &
# run the locust load test
poetry run locust --host=http://localhost:9464 --users=100 --headless --skip-log-setup &

# kill all child processes on exit
trap "trap - SIGTERM && kill -- -$$" SIGINT SIGTERM EXIT
trap "trap - SIGTERM && kill -- -$$" INT TERM EXIT
wait
4 changes: 3 additions & 1 deletion examples/docs-example.py
@@ -1,4 +1,6 @@
from autometrics import autometrics
from autometrics import autometrics, init

init()


@autometrics
8 changes: 4 additions & 4 deletions examples/example.py
@@ -1,7 +1,6 @@
import time
import random
from prometheus_client import start_http_server
from autometrics import autometrics
from autometrics import autometrics, init
from autometrics.objectives import Objective, ObjectiveLatency, ObjectivePercentile


@@ -63,8 +62,9 @@ def random_error():
# Show the docstring (with links to prometheus metrics) for the `div_unhandled` method
print(div_unhandled.__doc__)

# Start an HTTP server on port 8080 using the Prometheus client library, which exposes our metrics to prometheus
start_http_server(8080)
# Initialize autometrics and start an HTTP server on port 8080 using
# the Prometheus client library, which exposes our metrics to prometheus
init(exporter={"type": "prometheus", "port": 8080})

# Enter an infinite loop (with a 2 second sleep period), calling the "div_handled", "add", and "div_unhandled" methods,
# in order to generate metrics.
27 changes: 27 additions & 0 deletions examples/export_metrics/otel-prometheus.py
@@ -0,0 +1,27 @@
import time
from autometrics import autometrics, init

# Autometrics supports exporting metrics to Prometheus via the OpenTelemetry SDK.
# This example uses the Prometheus exporter; the available settings are the same
# as those of the Prometheus Python client. By default, the Prometheus exporter
# will expose metrics on port 9464. If you don't have a Prometheus server running,
# you can run Tilt or Docker Compose from the root of this repo to start one up.

init(
tracker="opentelemetry",
exporter={
"type": "prometheus",
"port": 9464,
},
service_name="prom-exporter",
)


@autometrics
def my_function():
pass


while True:
my_function()
time.sleep(1)
38 changes: 38 additions & 0 deletions examples/export_metrics/otlp-grpc.py
@@ -0,0 +1,38 @@
import time
from autometrics import autometrics, init
from opentelemetry.sdk.metrics import Counter
from opentelemetry.sdk.metrics.export import (
AggregationTemporality,
)

# Autometrics supports exporting metrics to OTLP collectors via gRPC and HTTP transports.
# This example uses the gRPC transport; the available settings are similar to those of
# the OpenTelemetry Python SDK. By default, the OTLP exporter will send metrics to localhost:4317.
# If you don't have an OTLP collector running, you can run Tilt or Docker Compose
# to start one up.

init(
exporter={
"type": "otlp-proto-grpc",
"endpoint": "http://localhost:4317", # You don't need to set this if you are using the default endpoint
"insecure": True,  # Use a plaintext (non-TLS) connection; needed for http:// endpoints
"push_interval": 1000,
# Here are some other available settings:
# "timeout": 10,
# "headers": {"x-something": "value"},
# "aggregation_temporality": {
# Counter: AggregationTemporality.CUMULATIVE,
# },
},
service_name="otlp-exporter",
)


@autometrics
def my_function():
pass


while True:
my_function()
time.sleep(1)
37 changes: 37 additions & 0 deletions examples/export_metrics/otlp-http.py
@@ -0,0 +1,37 @@
import time
from autometrics import autometrics, init
from opentelemetry.sdk.metrics import Counter
from opentelemetry.sdk.metrics.export import (
AggregationTemporality,
)

# Autometrics supports exporting metrics to OTLP collectors via gRPC and HTTP transports.
# This example uses the HTTP transport; the available settings are similar to those of
# the OpenTelemetry Python SDK. By default, the OTLP exporter will send metrics to localhost:4318.
# If you don't have an OTLP collector running, you can run Tilt or Docker Compose
# to start one up.

init(
exporter={
"type": "otlp-proto-http",
"endpoint": "http://localhost:4318/", # You don't need to set this if you are using the default endpoint
"push_interval": 1000,
# Here are some other available settings:
# "timeout": 10,
# "headers": {"x-something": "value"},
# "aggregation_temporality": {
# Counter: AggregationTemporality.CUMULATIVE,
# },
},
service_name="otlp-exporter",
)


@autometrics
def my_function():
pass


while True:
my_function()
time.sleep(1)
27 changes: 27 additions & 0 deletions examples/export_metrics/prometheus-client.py
@@ -0,0 +1,27 @@
import time
from autometrics import autometrics, init

# Autometrics supports exporting metrics to Prometheus via the Prometheus Python client.
# The available settings are the same as those of the Prometheus Python client.
# By default, the Prometheus exporter will expose metrics on port 9464.
# If you don't have a Prometheus server running, you can run Tilt or
# Docker Compose from the root of this repo to start one up.

init(
tracker="prometheus",
exporter={
"type": "prometheus",
"port": 9464,
},
service_name="prom-exporter",
)


@autometrics
def my_function():
pass


while True:
my_function()
time.sleep(1)
52 changes: 49 additions & 3 deletions examples/fastapi-example.py
@@ -1,7 +1,9 @@
import asyncio
from fastapi import FastAPI, Response
import uvicorn
from autometrics import autometrics

from autometrics import autometrics, init
from fastapi import FastAPI, Response
from fastapi.responses import JSONResponse
from prometheus_client import generate_latest

app = FastAPI()
@@ -43,5 +45,49 @@ async def do_something_async():
return "async world"


def response_is_error(response: Response):
    return response.status_code >= 400


@app.get("/not-implemented")
@autometrics(record_error_if=response_is_error)
def not_implemented():
return JSONResponse(
status_code=501, content={"message": "This endpoint is not implemented"}
)


@app.get("/flowers/{flower_name}")
def flower(flower_name: str):
try:
return JSONResponse(content={"message": get_pretty_flower(flower_name)})
except NotFoundError as error:
return JSONResponse(status_code=404, content={"message": str(error)})


class NotFoundError(Exception):
pass


def is_not_found_error(error: Exception):
return isinstance(error, NotFoundError)


@autometrics(record_success_if=is_not_found_error)
def get_pretty_flower(flower_name: str):
"""Returns whether the flower is pretty"""
print(f"Getting pretty flower for {flower_name}")
flowers = ["rose", "tulip", "daisy"]
if flower_name not in flowers:
raise NotFoundError(
f"Flower {flower_name} not found. Perhaps you meant one of these: {', '.join(flowers)}?"
)
return f"A {flower_name} is pretty"


init(service_name="fastapi")


if __name__ == "__main__":
uvicorn.run(app, host="localhost", port=8080)
uvicorn.run(app, host="0.0.0.0", port=8080)
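The `record_error_if` / `record_success_if` hooks used above classify call outcomes with plain predicates. A library-agnostic sketch of that pattern follows; the `track` decorator and `last_label` attribute are hypothetical stand-ins, not the autometrics implementation.

```python
import functools


def track(record_error_if=None, record_success_if=None):
    """Toy decorator illustrating the predicate pattern: classify each
    call's outcome as "ok" or "error" and record it on the wrapper."""

    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                result = fn(*args, **kwargs)
            except Exception as exc:
                # Exceptions count as errors unless the success predicate
                # reclassifies them (e.g. an expected NotFoundError).
                wrapper.last_label = (
                    "ok" if record_success_if and record_success_if(exc) else "error"
                )
                raise
            # Normal returns count as successes unless the error predicate
            # reclassifies them (e.g. a response with status >= 400).
            wrapper.last_label = (
                "error" if record_error_if and record_error_if(result) else "ok"
            )
            return result

        return wrapper

    return decorator


@track(record_error_if=lambda status: status >= 400)
def handler(status):
    return status


handler(200)
print(handler.last_label)  # -> ok
handler(503)
print(handler.last_label)  # -> error
```

This mirrors how the FastAPI example above turns a 501 response into an `error`-labeled metric even though no exception was raised, and a `NotFoundError` into a success.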
8 changes: 5 additions & 3 deletions examples/fastapi-with-fly-io/README.md
Original file line number Diff line number Diff line change
@@ -54,7 +54,7 @@ After this we're ready to add some code. Create a file named `app.py` in your fa

```python
import time
from autometrics import autometrics
from autometrics import autometrics, init
# Import below is needed for the service level objective (SLO) support
from autometrics.objectives import Objective, ObjectiveLatency, ObjectivePercentile
from fastapi import FastAPI, Response
@@ -104,11 +104,13 @@ def do_something():
# This function doesn't do much
print("done")

# In order for prometheus to get the data we'll set
# Before starting the server, we need to initialize autometrics
# by calling init(). In order for prometheus to get the data
# we'll also pass the configuration that will set
# up a separate endpoint that exposes data in a format
# that prometheus can understand.
# This metrics server will run on port 8008
start_http_server(8008)
init(exporter={"type": "prometheus", "port": 8008})

# If the app is not run by fly.io in a container but using python
# directly we enter this flow and it is run on port 8080
10 changes: 6 additions & 4 deletions examples/fastapi-with-fly-io/app.py
Original file line number Diff line number Diff line change
@@ -1,8 +1,7 @@
import time
from autometrics import autometrics
from autometrics import autometrics, init
from autometrics.objectives import Objective, ObjectiveLatency, ObjectivePercentile
from fastapi import FastAPI, Response
from prometheus_client import start_http_server
import uvicorn

app = FastAPI()
@@ -50,11 +49,14 @@ def do_something():
print("done")


# In order for prometheus to get the data we'll set
# Before starting the server, we need to initialize autometrics
# by calling init(). In order for prometheus to get the data
# we'll also pass the configuration that will set
# up a separate endpoint that exposes data in a format
# that prometheus can understand.
# This metrics server will run on port 8008
start_http_server(8008)
init(exporter={"type": "prometheus", "port": 8008})


# If the app is not run by fly.io in a container but using python
# directly we enter this flow and it is run on port 8080
28 changes: 17 additions & 11 deletions examples/starlette-otel-exemplars.py
@@ -1,16 +1,17 @@
from opentelemetry import trace
from autometrics import autometrics
from prometheus_client import REGISTRY
from prometheus_client.openmetrics.exposition import generate_latest
from starlette import applications
from starlette.responses import PlainTextResponse
from starlette.routing import Route
import uvicorn

from autometrics import autometrics, init
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import (
BatchSpanProcessor,
ConsoleSpanExporter,
)
from prometheus_client import REGISTRY
from prometheus_client.openmetrics.exposition import generate_latest
from starlette import applications
from starlette.responses import PlainTextResponse
from starlette.routing import Route

# Let's start by setting up the OpenTelemetry SDK with some defaults
provider = TracerProvider()
@@ -21,6 +22,10 @@
# Now we can instrument our Starlette application
tracer = trace.get_tracer(__name__)

# Exemplars support requires some additional configuration on autometrics,
# so we need to initialize it with the proper settings
init(tracker="prometheus", enable_exemplars=True, service_name="starlette")


# We need to add tracer decorator before autometrics so that we see the spans
@tracer.start_as_current_span("request")
@@ -39,7 +44,7 @@ def inner_function():

def metrics(request):
# Exemplars are not supported by default prometheus format, so we specifically
# make an endpoint that uses the OpenMetrics format that supoorts exemplars.
# make an endpoint that uses the OpenMetrics format that supports exemplars.
body = generate_latest(REGISTRY)
return PlainTextResponse(body, media_type="application/openmetrics-text")

@@ -48,9 +53,10 @@ def metrics(request):
routes=[Route("/", outer_function), Route("/metrics", metrics)]
)

# Now, start the app (env variables are required to enable exemplars):
# AUTOMETRICS_TRACKER=prometheus AUTOMETRICS_EXEMPLARS=true uvicorn starlette-otel-exemplars:app --port 8080
# And make some requests to /. You should see the spans in the console.
if __name__ == "__main__":
uvicorn.run(app, host="0.0.0.0", port=8080)

# Start the app and make some requests to http://127.0.0.1:8080/, you should see the spans in the console.
# With autometrics extension installed, you can now hover over the hello handler
# and see the charts and queries associated with them. Open one of the queries
# in Prometheus and you should see exemplars added to the metrics. Don't forget
2,034 changes: 1,239 additions & 795 deletions poetry.lock

Large diffs are not rendered by default.

81 changes: 62 additions & 19 deletions pyproject.toml
@@ -1,50 +1,89 @@
[tool.poetry]
name = "autometrics"
version = "0.7"
version = "1.0.0"
description = "Easily add metrics to your system – and actually understand them using automatically customized Prometheus queries"
authors = ["Fiberplane <info@fiberplane.com>"]
license = "MIT OR Apache-2.0"
readme = "README.md"
repository = "https://github.com/autometrics-dev/autometrics-py"
homepage = "https://github.com/autometrics-dev/autometrics-py"
keywords = ["metrics", "telemetry", "prometheus", "monitoring", "observability", "instrumentation"]
# classifiers = [
# "Topic :: Software Development :: Build Tools",
# "Topic :: Software Development :: Libraries :: Python Modules"
# ]
packages = [{include = "autometrics", from = "src"}]
keywords = [
"metrics",
"telemetry",
"prometheus",
"monitoring",
"observability",
"instrumentation",
"tracing",
]
classifiers = [
"Topic :: Software Development :: Build Tools",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: System :: Monitoring",
"Typing :: Typed",
]
packages = [{ include = "autometrics", from = "src" }]

[tool.poetry.dependencies]
opentelemetry-api = "^1.17.0"
opentelemetry-exporter-prometheus = "^1.12.0rc1"
# The prometheus exporter is pinned to a beta version because of how opentelemetry-python has been releasing it.
# Technically, the version 1.12.0rc1 is the "latest" on pypi, but it's not the latest release.
# 0.41b0 includes the fix for exporting gauge values (previously they were always turned into counters).
opentelemetry-exporter-prometheus = "0.41b0"
opentelemetry-exporter-otlp-proto-http = { version = "^1.20.0", optional = true }
opentelemetry-exporter-otlp-proto-grpc = { version = "^1.20.0", optional = true }
opentelemetry-sdk = "^1.17.0"
prometheus-client = "0.16.0"
prometheus-client = "^0.16.0 || ^0.17.0"
pydantic = "^2.4.1"
python = "^3.8"
python-dotenv = "1.0.0"
python-dotenv = "^1.0.0"
typing-extensions = "^4.5.0"

[tool.poetry.extras]
exporter-otlp-proto-http = ["opentelemetry-exporter-otlp-proto-http"]
exporter-otlp-proto-grpc = ["opentelemetry-exporter-otlp-proto-grpc"]

[tool.poetry.group.dev]
optional = true

[tool.mypy]
namespace_packages = true
mypy_path = "src"
enable_incomplete_feature = "Unpack"

# This override is needed because with certain flavors of python and
# mypy you can get the following error:
# opentelemetry/attributes/__init__.py:14: error: invalid syntax
# Which at the time of writing is triggered by a line that is just a
# type-ignore comment: `# type: ignore`
[[tool.mypy.overrides]]
module = ["opentelemetry.attributes"]
follow_imports = "skip"

[tool.pytest.ini_options]
usefixtures = "reset_environment"

[tool.poetry.group.dev.dependencies]
pyright = "^1.1.307"
pytest = "^7.3.0"
pytest-asyncio = "^0.21.0"
black = "^23.3.0"
pytest-xdist = "^3.3.1"
mypy = "^1.5.1"
twine = "4.0.2"


[tool.poetry.group.examples]
optional = true

[tool.poetry.group.examples.dependencies]
anyio = "3.6.2"
anyio = "3.7.1"
bleach = "6.0.0"
build = "0.10.0"
certifi = "2022.12.7"
certifi = "2023.7.22"
charset-normalizer = "3.1.0"
click = "8.1.3"
django = "^4.2"
docutils = "0.19"
fastapi = "0.97.0"
fastapi = "^0.103.1"
h11 = "0.14.0"
idna = "3.4"
# pinned importlib-metadata to version ~6.0.0 because of opentelemetry-api
@@ -56,8 +95,7 @@ mdurl = "0.1.2"
more-itertools = "9.1.0"
packaging = "23.0"
pkginfo = "1.9.6"
pydantic = "1.10.6"
pygments = "2.14.0"
pygments = "2.16.1"
pyproject-hooks = "1.0.0"
readme-renderer = "37.3"
requests = "2.31.0"
@@ -67,12 +105,17 @@ rich = "13.3.2"
six = "1.16.0"
sniffio = "1.3.0"
starlette = ">=0.27.0,<0.28.0"
twine = "4.0.2"
urllib3 = "1.26.15"
urllib3 = "1.26.18"
uvicorn = "0.21.1"
webencodings = "0.5.1"
zipp = "3.15.0"
locust = "^2.15.1"
django-stubs = "4.2.3"


[tool.poetry.group.development.dependencies]
types-requests = "^2.31.0.2"
django-stubs = "^4.2.3"

[build-system]
requires = ["poetry-core"]
1 change: 1 addition & 0 deletions src/autometrics/__init__.py
@@ -1 +1,2 @@
from .decorator import *
from .initialization import init
27 changes: 27 additions & 0 deletions src/autometrics/conftest.py
@@ -0,0 +1,27 @@
import os
import pytest

in_ci = os.getenv("CI", "false") == "true"


@pytest.fixture()
def reset_environment(monkeypatch):
import importlib
import opentelemetry
import prometheus_client
from . import initialization
from .tracker import tracker

importlib.reload(opentelemetry)
importlib.reload(prometheus_client)
importlib.reload(initialization)
importlib.reload(tracker)
# we'll set debug to true to ensure calling init more than once will fail the whole test
monkeypatch.setenv("AUTOMETRICS_DEBUG", "true")

# GitHub CI uses https, so for tests to pass we force the ssh url
if in_ci:
monkeypatch.setenv(
"AUTOMETRICS_REPOSITORY_URL",
"git@github.com:autometrics-dev/autometrics-py.git",
)
12 changes: 11 additions & 1 deletion src/autometrics/constants.py
@@ -1,15 +1,25 @@
"""Constants used by autometrics"""

COUNTER_NAME = "function.calls.count"
SPEC_VERSION = "1.0.0"

COUNTER_NAME = "function.calls"
HISTOGRAM_NAME = "function.calls.duration"
CONCURRENCY_NAME = "function.calls.concurrent"
# NOTE - The Rust implementation does not use `build.info`, instead opts for just `build_info`
BUILD_INFO_NAME = "build_info"
SERVICE_NAME = "service.name"
REPOSITORY_URL = "repository.url"
REPOSITORY_PROVIDER = "repository.provider"
AUTOMETRICS_VERSION = "autometrics.version"


COUNTER_NAME_PROMETHEUS = COUNTER_NAME.replace(".", "_")
HISTOGRAM_NAME_PROMETHEUS = HISTOGRAM_NAME.replace(".", "_")
CONCURRENCY_NAME_PROMETHEUS = CONCURRENCY_NAME.replace(".", "_")
SERVICE_NAME_PROMETHEUS = SERVICE_NAME.replace(".", "_")
REPOSITORY_URL_PROMETHEUS = REPOSITORY_URL.replace(".", "_")
REPOSITORY_PROVIDER_PROMETHEUS = REPOSITORY_PROVIDER.replace(".", "_")
AUTOMETRICS_VERSION_PROMETHEUS = AUTOMETRICS_VERSION.replace(".", "_")

COUNTER_DESCRIPTION = "Autometrics counter for tracking function calls"
HISTOGRAM_DESCRIPTION = "Autometrics histogram for tracking function call duration"
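The renamed constants keep the OTel-style dotted names as the source of truth and derive the `*_PROMETHEUS` variants from them. A small sketch of the conversion (`to_prometheus_name` is a hypothetical helper name; the module itself just calls `.replace()` inline):

```python
COUNTER_NAME = "function.calls"
HISTOGRAM_NAME = "function.calls.duration"

def to_prometheus_name(dotted: str) -> str:
    # Prometheus metric names may not contain dots, so each dotted
    # OTel-style name gets an underscore-separated counterpart.
    return dotted.replace(".", "_")
```

This is why the test expectations later in the diff look for `function_calls_total` rather than `function.calls`.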
209 changes: 150 additions & 59 deletions src/autometrics/decorator.py
@@ -1,10 +1,10 @@
"""Autometrics module."""
from contextvars import ContextVar
import time
import inspect

from contextvars import ContextVar, Token
from functools import wraps
from typing import overload, TypeVar, Callable, Optional, Awaitable
from typing import overload, TypeVar, Callable, Optional, Awaitable, Union, Coroutine
from typing_extensions import ParamSpec

from .objectives import Objective
@@ -15,38 +15,60 @@
append_docs_to_docstring,
)

Params = ParamSpec("Params")
R = TypeVar("R")
Y = TypeVar("Y")
S = TypeVar("S")

P = ParamSpec("P")
T = TypeVar("T")
caller_module_var: ContextVar[str] = ContextVar("caller.module", default="")
caller_function_var: ContextVar[str] = ContextVar("caller.function", default="")


caller_var: ContextVar[str] = ContextVar("caller", default="")


# Bare decorator usage
# Decorator with arguments (where decorated function returns an awaitable)
@overload
def autometrics(func: Callable[P, T]) -> Callable[P, T]:
def autometrics(
func: None = None,
*,
objective: Optional[Objective] = None,
track_concurrency: Optional[bool] = False,
record_error_if: Callable[[R], bool],
record_success_if: Optional[Callable[[Exception], bool]] = None,
) -> Union[
Callable[
[Callable[Params, Coroutine[Y, S, R]]], Callable[Params, Coroutine[Y, S, R]]
],
Callable[[Callable[Params, R]], Callable[Params, R]],
]:
...


# Decorator with arguments (where decorated function returns an awaitable)
@overload
def autometrics(func: Callable[P, Awaitable[T]]) -> Callable[P, Awaitable[T]]:
def autometrics(
func: None = None,
*,
objective: Optional[Objective] = None,
track_concurrency: Optional[bool] = False,
record_success_if: Optional[Callable[[Exception], bool]] = None,
) -> Callable[[Callable[Params, R]], Callable[Params, R]]:
...


# Decorator with arguments
# Using the func parameter
# i.e. using @autometrics()
@overload
def autometrics(
*, objective: Optional[Objective] = None, track_concurrency: Optional[bool] = False
) -> Callable:
func: Callable[Params, R],
) -> Callable[Params, R]:
...


def autometrics(
func: Optional[Callable] = None,
*,
objective: Optional[Objective] = None,
track_concurrency: Optional[bool] = False,
func=None,
objective=None,
track_concurrency=None,
record_error_if=None,
record_success_if=None,
):
"""Decorator for tracking function calls and duration. Supports synchronous and async functions."""

@@ -63,118 +85,187 @@ def track_start(function: str, module: str):
function=function, module=module, track_concurrency=track_concurrency
)

def track_result_ok(start_time: float, function: str, module: str, caller: str):
def track_result_ok(
duration: float,
function: str,
module: str,
caller_module: str,
caller_function: str,
):
get_tracker().finish(
start_time,
duration,
function=function,
module=module,
caller=caller,
caller_module=caller_module,
caller_function=caller_function,
objective=objective,
track_concurrency=track_concurrency,
result=Result.OK,
)

def track_result_error(
start_time: float,
duration: float,
function: str,
module: str,
caller: str,
caller_module: str,
caller_function: str,
):
get_tracker().finish(
start_time,
duration,
function=function,
module=module,
caller=caller,
caller_module=caller_module,
caller_function=caller_function,
objective=objective,
track_concurrency=track_concurrency,
result=Result.ERROR,
)

def sync_decorator(func: Callable[P, T]) -> Callable[P, T]:
def sync_decorator(func: Callable[Params, R]) -> Callable[Params, R]:
"""Helper for decorating synchronous functions, to track calls and duration."""

module_name = get_module_name(func)
func_name = get_function_name(func)
register_function_info(func_name, module_name)

@wraps(func)
def sync_wrapper(*args: P.args, **kwds: P.kwargs) -> T:
def sync_wrapper(*args: Params.args, **kwds: Params.kwargs) -> R:
caller_module = caller_module_var.get()
caller_function = caller_function_var.get()
context_token_module: Optional[Token] = None
context_token_function: Optional[Token] = None
start_time = time.time()
caller = caller_var.get()
context_token = None

try:
context_token = caller_var.set(func_name)
context_token_module = caller_module_var.set(module_name)
context_token_function = caller_function_var.set(func_name)
if track_concurrency:
track_start(module=module_name, function=func_name)
result = func(*args, **kwds)
track_result_ok(
start_time, function=func_name, module=module_name, caller=caller
)
duration = time.time() - start_time
if record_error_if and record_error_if(result):
track_result_error(
duration,
function=func_name,
module=module_name,
caller_module=caller_module,
caller_function=caller_function,
)
else:
track_result_ok(
duration,
function=func_name,
module=module_name,
caller_module=caller_module,
caller_function=caller_function,
)

except Exception as exception:
result = exception.__class__.__name__
track_result_error(
start_time,
function=func_name,
module=module_name,
caller=caller,
)
duration = time.time() - start_time
if record_success_if and record_success_if(exception):
track_result_ok(
duration,
function=func_name,
module=module_name,
caller_module=caller_module,
caller_function=caller_function,
)
else:
track_result_error(
duration,
function=func_name,
module=module_name,
caller_module=caller_module,
caller_function=caller_function,
)
# Reraise exception
raise exception

finally:
if context_token is not None:
caller_var.reset(context_token)
if context_token_module is not None:
caller_module_var.reset(context_token_module)
if context_token_function is not None:
caller_function_var.reset(context_token_function)

return result

sync_wrapper.__doc__ = append_docs_to_docstring(func, func_name, module_name)
return sync_wrapper

def async_decorator(func: Callable[P, Awaitable[T]]) -> Callable[P, Awaitable[T]]:
def async_decorator(
func: Callable[Params, Awaitable[R]]
) -> Callable[Params, Awaitable[R]]:
"""Helper for decorating async functions, to track calls and duration."""

module_name = get_module_name(func)
func_name = get_function_name(func)
register_function_info(func_name, module_name)

@wraps(func)
async def async_wrapper(*args: P.args, **kwds: P.kwargs) -> T:
async def async_wrapper(*args: Params.args, **kwds: Params.kwargs) -> R:
caller_module = caller_module_var.get()
caller_function = caller_function_var.get()
context_token_module: Optional[Token] = None
context_token_function: Optional[Token] = None
start_time = time.time()
caller = caller_var.get()
context_token = None

try:
context_token = caller_var.set(func_name)
context_token_module = caller_module_var.set(module_name)
context_token_function = caller_function_var.set(func_name)
if track_concurrency:
track_start(module=module_name, function=func_name)
result = await func(*args, **kwds)
track_result_ok(
start_time, function=func_name, module=module_name, caller=caller
)
duration = time.time() - start_time
if record_error_if and record_error_if(result):
track_result_error(
duration,
function=func_name,
module=module_name,
caller_module=caller_module,
caller_function=caller_function,
)
else:
track_result_ok(
duration,
function=func_name,
module=module_name,
caller_module=caller_module,
caller_function=caller_function,
)

except Exception as exception:
result = exception.__class__.__name__
track_result_error(
start_time,
function=func_name,
module=module_name,
caller=caller,
)
duration = time.time() - start_time
if record_success_if and record_success_if(exception):
track_result_ok(
duration,
function=func_name,
module=module_name,
caller_module=caller_module,
caller_function=caller_function,
)
else:
track_result_error(
duration,
function=func_name,
module=module_name,
caller_module=caller_module,
caller_function=caller_function,
)
# Reraise exception
raise exception

finally:
if context_token is not None:
caller_var.reset(context_token)
if context_token_module is not None:
caller_module_var.reset(context_token_module)
if context_token_function is not None:
caller_function_var.reset(context_token_function)

return result

async_wrapper.__doc__ = append_docs_to_docstring(func, func_name, module_name)
return async_wrapper

def pick_decorator(func: Callable) -> Callable:
def pick_decorator(func):
"""Pick the correct decorator based on the function type."""
if inspect.iscoroutinefunction(func):
return async_decorator(func)
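The split of the old `caller_var` into `caller_module_var` and `caller_function_var` relies on `ContextVar` set/reset tokens to record who called whom. A simplified, self-contained model of that mechanism (the real decorator also records module, duration, concurrency, and objectives):

```python
from contextvars import ContextVar
from functools import wraps

caller_function_var: ContextVar[str] = ContextVar("caller.function", default="")
calls = []  # (function, caller) pairs, recorded as each call finishes

def tracked(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        caller = caller_function_var.get()              # whoever called us
        token = caller_function_var.set(func.__name__)  # we become the caller
        try:
            return func(*args, **kwargs)
        finally:
            # restore the previous caller even if the call raised
            caller_function_var.reset(token)
            calls.append((func.__name__, caller))
    return wrapper

@tracked
def inner():
    return 1

@tracked
def outer():
    return inner()

outer()
```

Because the inner call finishes first, `calls` ends up as `[("inner", "outer"), ("outer", "")]`: the nested function sees its caller via the context variable, and the outermost function sees the empty default.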
File renamed without changes.
159 changes: 159 additions & 0 deletions src/autometrics/exposition.py
@@ -0,0 +1,159 @@
from opentelemetry.sdk.metrics.export import (
AggregationTemporality,
MetricReader,
PeriodicExportingMetricReader,
)
from opentelemetry.exporter.prometheus import PrometheusMetricReader
from prometheus_client import start_http_server
from pydantic import ConfigDict, TypeAdapter
from typing import Dict, Literal, Optional, Union
from typing_extensions import TypedDict

# GRPC is optional so we'll only type it if it's available
try:
from grpc import ChannelCredentials # type: ignore
except ImportError:
ChannelCredentials = None


# All of these are split into two parts because having
# a wall of Optional[...] is not very readable and Required[...] is 3.11+
class OtlpGrpcExporterBase(TypedDict):
"""Base type for OTLP GRPC exporter configuration."""

type: Literal["otlp-proto-grpc"]


class OtlpGrpcExporterOptions(OtlpGrpcExporterBase, total=False):
"""Configuration for OTLP GRPC exporter."""

__pydantic_config__ = ConfigDict(arbitrary_types_allowed=True) # type: ignore
endpoint: str
insecure: bool
headers: Dict[str, str]
credentials: ChannelCredentials
push_interval: int
timeout: int
preferred_temporality: Dict[type, AggregationTemporality]


OtlpGrpcExporterValidator = TypeAdapter(OtlpGrpcExporterOptions)


class OtlpHttpExporterBase(TypedDict):
"""Base type for OTLP HTTP exporter configuration."""

type: Literal["otlp-proto-http"]


class OtlpHttpExporterOptions(OtlpHttpExporterBase, total=False):
"""Configuration for OTLP HTTP exporter."""

endpoint: str
headers: Dict[str, str]
push_interval: int
timeout: int
preferred_temporality: Dict[type, AggregationTemporality]


OtlpHttpExporterValidator = TypeAdapter(OtlpHttpExporterOptions)


class PrometheusExporterBase(TypedDict):
"""Base type for OTLP Prometheus exporter configuration."""

type: Literal["prometheus"]


class PrometheusExporterOptions(PrometheusExporterBase, total=False):
"""Configuration for Prometheus exporter."""

address: str
port: int


PrometheusValidator = TypeAdapter(PrometheusExporterOptions)


class OtelCustomExporterBase(TypedDict):
"""Base type for OTLP Prometheus exporter configuration."""

type: Literal["otel-custom"]


class OtelCustomExporterOptions(OtelCustomExporterBase, total=False):
"""Configuration for OpenTelemetry Prometheus exporter."""

__pydantic_config__ = ConfigDict(arbitrary_types_allowed=True) # type: ignore
exporter: MetricReader


OtelCustomValidator = TypeAdapter(OtelCustomExporterOptions)


ExporterOptions = Union[
OtlpGrpcExporterOptions,
OtlpHttpExporterOptions,
PrometheusExporterOptions,
OtelCustomExporterOptions,
]


def create_exporter(config: ExporterOptions) -> Optional[MetricReader]:
"""Create an exporter based on the configuration."""
if config["type"] == "prometheus":
config = PrometheusValidator.validate_python(config)
start_http_server(
config.get("port", 9464),
config.get("address", "0.0.0.0"),
)
return PrometheusMetricReader()
if config["type"] == "otlp-proto-http":
config = OtlpHttpExporterValidator.validate_python(config)
try:
from opentelemetry.exporter.otlp.proto.http.metric_exporter import (
OTLPMetricExporter as OTLPHTTPMetricExporter,
)

http_exporter = OTLPHTTPMetricExporter(
endpoint=config.get("endpoint", None),
headers=config.get("headers", None),
timeout=config.get("timeout", None),
preferred_temporality=config.get("preferred_temporality", {}),
)
http_reader = PeriodicExportingMetricReader(
http_exporter,
export_interval_millis=config.get("push_interval", None),
export_timeout_millis=config.get("timeout", None),
)
return http_reader
except ImportError:
raise ImportError("OTLP exporter (HTTP) not installed")
if config["type"] == "otlp-proto-grpc":
config = OtlpGrpcExporterValidator.validate_python(config)
try:
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import ( # type: ignore
OTLPMetricExporter as OTLPGRPCMetricExporter,
)

grpc_exporter = OTLPGRPCMetricExporter(
endpoint=config.get("endpoint", None),
insecure=config.get("insecure", None),
credentials=config.get("credentials", None),
headers=config.get("headers", None),
timeout=config.get("timeout", None),
preferred_temporality=config.get("preferred_temporality", {}),
)
grpc_reader = PeriodicExportingMetricReader(
grpc_exporter,
export_interval_millis=config.get("push_interval", None),
export_timeout_millis=config.get("timeout", None),
)
return grpc_reader
except ImportError:
raise ImportError("OTLP exporter (GRPC) not installed")
if config["type"] == "otel-custom":
config = OtelCustomValidator.validate_python(config)
return config.get("exporter", None)
else:
raise ValueError("Invalid exporter type")
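`create_exporter()` is essentially a dispatch on the `type` field of the config. A stripped-down sketch of that control flow (`pick_reader` is a hypothetical name; the real function validates the config with pydantic, constructs actual `MetricReader` instances, and may raise `ImportError` when an optional OTLP dependency is missing):

```python
def pick_reader(config: dict):
    # Each known exporter type maps to a reader; anything else is rejected.
    kind = config.get("type")
    if kind == "prometheus":
        # default port matches the 9464 fallback in create_exporter()
        return ("prometheus-reader", config.get("port", 9464))
    if kind in ("otlp-proto-http", "otlp-proto-grpc"):
        return ("periodic-reader", kind)
    if kind == "otel-custom":
        return config.get("exporter")
    raise ValueError("Invalid exporter type")
```

The final `else: raise ValueError(...)` in the real code plays the same role as the bare `raise` here: an unrecognized `type` fails loudly instead of silently producing no exporter.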
38 changes: 38 additions & 0 deletions src/autometrics/initialization.py
@@ -0,0 +1,38 @@
import logging
import os

from typing_extensions import Unpack


from .tracker import init_tracker, get_tracker
from .tracker.temporary import TemporaryTracker
from .settings import AutometricsOptions, init_settings

has_inited = False
DOUBLE_INIT_ERROR = "Cannot call init() more than once."
NOT_TEMP_TRACKER_ERROR = "Expected tracker to be TemporaryTracker."


def init(**kwargs: Unpack[AutometricsOptions]):
"""Initialization function that is used to configure autometrics. This function should be called
immediately after starting your app. You cannot call this function more than once.
"""
global has_inited
if has_inited:
if os.environ.get("AUTOMETRICS_DEBUG") == "true":
raise RuntimeError(DOUBLE_INIT_ERROR)
else:
logging.warning(f"{DOUBLE_INIT_ERROR} This init() call will be ignored.")
return
has_inited = True

temp_tracker = get_tracker()
if not isinstance(temp_tracker, TemporaryTracker):
if os.environ.get("AUTOMETRICS_DEBUG") == "true":
raise RuntimeError(NOT_TEMP_TRACKER_ERROR)
else:
logging.warning(f"{NOT_TEMP_TRACKER_ERROR} This init() call will be ignored.")
return
settings = init_settings(**kwargs)
tracker = init_tracker(settings["tracker"], settings)
temp_tracker.replay_queue(tracker)
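The double-init guard above is worth calling out: a second `init()` is a hard error only when `AUTOMETRICS_DEBUG=true`, and a logged no-op otherwise. A minimal sketch of just that guard (the real function also swaps out the `TemporaryTracker` and replays its queue):

```python
import logging
import os

_has_inited = False

def init_once() -> bool:
    # Returns True on the first call; subsequent calls either raise
    # (debug mode) or warn and return False.
    global _has_inited
    if _has_inited:
        if os.environ.get("AUTOMETRICS_DEBUG") == "true":
            raise RuntimeError("Cannot call init() more than once.")
        logging.warning("Cannot call init() more than once. "
                        "This init() call will be ignored.")
        return False
    _has_inited = True
    return True
```

This is also why the test fixture sets `AUTOMETRICS_DEBUG=true`: it turns an accidental second `init()` in a test into an immediate failure rather than a silent warning.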
9 changes: 9 additions & 0 deletions src/autometrics/objectives.py
@@ -1,4 +1,7 @@
import logging

from enum import Enum
from re import match
from typing import Optional, Tuple


@@ -89,3 +92,9 @@ def __init__(
self.name = name
self.success_rate = success_rate
self.latency = latency

# Check that name only contains alphanumeric characters, underscores, and hyphens
if match(r"^[\w-]+$", name) is None:
logging.getLogger().warning(
f"Objective name '{name}' contains invalid characters. Only alphanumeric characters, underscores, and hyphens are allowed."
)
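The validation above hinges on the `^[\w-]+$` pattern. A tiny self-contained sketch of the same check (`objective_name_is_valid` is a hypothetical helper; the library itself only logs a warning rather than rejecting the name):

```python
import re

def objective_name_is_valid(name: str) -> bool:
    # Mirrors the check in objectives.py. Note that \w also matches
    # underscores, so names like "my_slo" pass the pattern.
    return re.match(r"^[\w-]+$", name) is not None
```

Spaces, dots, and other punctuation make the match fail, which is what triggers the warning in the constructor.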
Empty file added src/autometrics/py.typed
Empty file.
125 changes: 125 additions & 0 deletions src/autometrics/settings.py
@@ -0,0 +1,125 @@
import os

from typing import cast, Dict, List, TypedDict, Optional, Any
from typing_extensions import Unpack

from .tracker.types import TrackerType
from .exposition import ExporterOptions
from .objectives import ObjectiveLatency
from .utils import extract_repository_provider, read_repository_url_from_fs


class AutometricsSettings(TypedDict):
"""Settings for autometrics."""

histogram_buckets: List[float]
tracker: TrackerType
exporter: Optional[ExporterOptions]
enable_exemplars: bool
service_name: str
commit: str
version: str
branch: str
repository_url: str
repository_provider: str


class AutometricsOptions(TypedDict, total=False):
"""User supplied overrides for autometrics settings."""

histogram_buckets: List[float]
tracker: str
exporter: Dict[str, Any]
enable_exemplars: bool
service_name: str
commit: str
version: str
branch: str
repository_url: str
repository_provider: str


def get_objective_boundaries():
"""Get the objective latency boundaries as float values in seconds (instead of strings)"""
return list(map(lambda c: float(c.value), ObjectiveLatency))


settings: Optional[AutometricsSettings] = None


def init_settings(**overrides: Unpack[AutometricsOptions]) -> AutometricsSettings:
tracker_setting = (
overrides.get("tracker") or os.getenv("AUTOMETRICS_TRACKER") or "opentelemetry"
)
tracker_type = (
TrackerType.PROMETHEUS
if tracker_setting.lower() == "prometheus"
else TrackerType.OPENTELEMETRY
)

exporter: Optional[ExporterOptions] = None
exporter_option = overrides.get("exporter")
if exporter_option:
exporter = cast(ExporterOptions, exporter_option)

repository_url: Optional[str] = overrides.get(
"repository_url", os.getenv("AUTOMETRICS_REPOSITORY_URL")
)
if repository_url is None:
repository_url = read_repository_url_from_fs()

repository_provider: Optional[str] = overrides.get(
"repository_provider", os.getenv("AUTOMETRICS_REPOSITORY_PROVIDER")
)
if repository_provider is None and repository_url is not None:
repository_provider = extract_repository_provider(repository_url)

config: AutometricsSettings = {
"histogram_buckets": overrides.get("histogram_buckets")
or get_objective_boundaries(),
"enable_exemplars": overrides.get(
"enable_exemplars", os.getenv("AUTOMETRICS_EXEMPLARS") == "true"
),
"tracker": tracker_type,
"exporter": exporter,
"service_name": overrides.get(
"service_name",
os.getenv(
"AUTOMETRICS_SERVICE_NAME",
os.getenv("OTEL_SERVICE_NAME", __package__.rsplit(".", 1)[0]),
),
),
"commit": overrides.get(
"commit", os.getenv("AUTOMETRICS_COMMIT", os.getenv("COMMIT_SHA", ""))
),
"branch": overrides.get(
"branch", os.getenv("AUTOMETRICS_BRANCH", os.getenv("BRANCH_NAME", ""))
),
"version": overrides.get("version", os.getenv("AUTOMETRICS_VERSION", "")),
"repository_url": repository_url or "",
"repository_provider": repository_provider or "",
}
validate_settings(config)

global settings
settings = config
return settings


def get_settings() -> AutometricsSettings:
"""Get the current settings."""
global settings
if settings is None:
settings = init_settings()
return settings


def validate_settings(settings: AutometricsSettings):
"""Ensure that the settings are valid. For example, we don't support OpenTelemetry exporters with Prometheus tracker."""
if settings["exporter"]:
exporter_type = settings["exporter"]["type"]
if settings["tracker"] == TrackerType.PROMETHEUS:
if exporter_type != "prometheus":
raise ValueError(
f"Exporter type {exporter_type} is not supported with Prometheus tracker."
)
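Every field in `init_settings()` follows the same precedence: an explicit keyword override wins, then the environment variable, then a default. A compact sketch of that rule (`resolve` is a hypothetical helper; the real code inlines this per field):

```python
import os

def resolve(overrides: dict, key: str, env_var: str, default: str) -> str:
    # Keyword override first, then the environment, then the default.
    value = overrides.get(key)
    if value is None:
        value = os.getenv(env_var)
    return value if value is not None else default
```

So `init(tracker="prometheus")` beats `AUTOMETRICS_TRACKER=opentelemetry`, and both beat the built-in `"opentelemetry"` default.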
4 changes: 3 additions & 1 deletion src/autometrics/test_caller.py
@@ -3,10 +3,12 @@
from prometheus_client.exposition import generate_latest

from .decorator import autometrics
from .initialization import init


def test_caller_detection():
"""This is a test to see if the caller is properly detected."""
init()

def dummy_decorator(func):
@wraps(func)
@@ -38,6 +40,6 @@ def bar():
assert blob is not None
data = blob.decode("utf-8")

expected = """function_calls_count_total{caller="test_caller_detection.<locals>.bar",function="test_caller_detection.<locals>.foo",module="autometrics.test_caller",objective_name="",objective_percentile="",result="ok"} 1.0"""
expected = """function_calls_total{caller_function="test_caller_detection.<locals>.bar",caller_module="autometrics.test_caller",function="test_caller_detection.<locals>.foo",module="autometrics.test_caller",objective_name="",objective_percentile="",result="ok",service_name="autometrics"} 1.0"""
assert "wrapper" not in data
assert expected in data
214 changes: 176 additions & 38 deletions src/autometrics/test_decorator.py

Large diffs are not rendered by default.

190 changes: 190 additions & 0 deletions src/autometrics/test_initialization.py
@@ -0,0 +1,190 @@
import pytest

from autometrics import init
from autometrics.exposition import PrometheusExporterOptions
from autometrics.tracker.opentelemetry import OpenTelemetryTracker
from autometrics.tracker.prometheus import PrometheusTracker
from autometrics.tracker.tracker import get_tracker
from autometrics.tracker.types import TrackerType
from autometrics.settings import get_settings


def test_init():
"""Test that the default settings are set correctly"""
init()
settings = get_settings()
assert settings == {
"histogram_buckets": [
0.005,
0.01,
0.025,
0.05,
0.075,
0.1,
0.25,
0.5,
0.75,
1.0,
2.5,
5.0,
7.5,
10.0,
],
"enable_exemplars": False,
"tracker": TrackerType.OPENTELEMETRY,
"exporter": None,
"service_name": "autometrics",
"commit": "",
"branch": "",
"version": "",
"repository_url": "git@github.com:autometrics-dev/autometrics-py.git",
"repository_provider": "github",
}
tracker = get_tracker()
assert isinstance(tracker, OpenTelemetryTracker)


def test_init_custom():
"""Test that setting custom settings works correctly"""
init(
tracker="prometheus",
service_name="test",
enable_exemplars=True,
version="1.0.0",
commit="123456",
branch="main",
)
settings = get_settings()
assert settings == {
"histogram_buckets": [
0.005,
0.01,
0.025,
0.05,
0.075,
0.1,
0.25,
0.5,
0.75,
1.0,
2.5,
5.0,
7.5,
10.0,
],
"enable_exemplars": True,
"tracker": TrackerType.PROMETHEUS,
"exporter": None,
"service_name": "test",
"commit": "123456",
"branch": "main",
"version": "1.0.0",
"repository_url": "git@github.com:autometrics-dev/autometrics-py.git",
"repository_provider": "github",
}
tracker = get_tracker()
assert isinstance(tracker, PrometheusTracker)


def test_init_env_vars(monkeypatch):
"""Test that setting custom settings via environment variables works correctly"""
monkeypatch.setenv("AUTOMETRICS_TRACKER", "prometheus")
monkeypatch.setenv("AUTOMETRICS_SERVICE_NAME", "test")
monkeypatch.setenv("AUTOMETRICS_EXEMPLARS", "true")
monkeypatch.setenv("AUTOMETRICS_VERSION", "1.0.0")
monkeypatch.setenv("AUTOMETRICS_COMMIT", "123456")
monkeypatch.setenv("AUTOMETRICS_BRANCH", "main")
init()
settings = get_settings()

assert settings == {
"histogram_buckets": [
0.005,
0.01,
0.025,
0.05,
0.075,
0.1,
0.25,
0.5,
0.75,
1.0,
2.5,
5.0,
7.5,
10.0,
],
"enable_exemplars": True,
"tracker": TrackerType.PROMETHEUS,
"exporter": None,
"service_name": "test",
"commit": "123456",
"branch": "main",
"version": "1.0.0",
"repository_url": "git@github.com:autometrics-dev/autometrics-py.git",
"repository_provider": "github",
}


def test_double_init():
"""Test that calling init twice fails"""
init()
with pytest.raises(RuntimeError):
init()


def test_init_with_exporter():
    """Test that setting exporter works correctly"""
    init(
        tracker="prometheus",
        exporter={
            "type": "prometheus",
        },
    )
    settings = get_settings()
    assert settings == {
        "histogram_buckets": [
            0.005,
            0.01,
            0.025,
            0.05,
            0.075,
            0.1,
            0.25,
            0.5,
            0.75,
            1.0,
            2.5,
            5.0,
            7.5,
            10.0,
        ],
        "enable_exemplars": False,
        "tracker": TrackerType.PROMETHEUS,
        "exporter": PrometheusExporterOptions(type="prometheus"),
        "service_name": "autometrics",
        "commit": "",
        "branch": "",
        "version": "",
        "repository_url": "git@github.com:autometrics-dev/autometrics-py.git",
        "repository_provider": "github",
    }
    tracker = get_tracker()
    assert isinstance(tracker, PrometheusTracker)
def test_init_exporter_validation():
    """Test that an invalid exporter configuration raises a ValueError"""
    with pytest.raises(ValueError):
        init(
            tracker="prometheus",
            exporter={
                "type": "otel-custom",
            },
        )


def test_init_repo_meta_suppress_detection():
    """Test that passing empty strings suppresses repository metadata detection"""
    init(repository_url="", repository_provider="")
    settings = get_settings()
    assert settings["repository_provider"] == ""
    assert settings["repository_url"] == ""
16 changes: 16 additions & 0 deletions src/autometrics/test_objectives.py
@@ -0,0 +1,16 @@
import logging

from autometrics.objectives import Objective


def test_objective_name_warning(caplog):
    """Test that a warning is logged when an objective name contains invalid characters."""
    caplog.set_level(logging.WARNING)
    caplog.clear()
    Objective("Incorrect name.")
    assert len(caplog.records) == 1
    assert caplog.records[0].levelname == "WARNING"
    assert "contains invalid characters" in caplog.records[0].message
    caplog.clear()
    Objective("correct-name-123")
    assert len(caplog.records) == 0
55 changes: 55 additions & 0 deletions src/autometrics/test_utils.py
@@ -0,0 +1,55 @@
import pytest

from autometrics.utils import get_repository_url

config1 = """
[core]
key = value
[remote "origin"]
key2 = value2
url = https://github.com/autometrics/autometrics-py.git
key3 = value3
[branch "main"]
some-key = some-value
"""

config2 = """
[core]
key = value
"""

config3 = """
[core]
key = value
[remote.origin]
key2 = value2
url = ssh://git@github.com:autometrics-dev/autometrics-py.git
key3 = value3
"""

config4 = """
[remote "upstream"]
url = "git@autometrics.dev/autometrics-ts.git"
[remote "origin"]
url = "git@github.com:autometrics-dev/autometrics-py.git"
"""


@pytest.fixture(
    params=[
        (config1, "https://github.com/autometrics/autometrics-py.git"),
        (config2, None),
        (config3, "ssh://git@github.com:autometrics-dev/autometrics-py.git"),
        (config4, "git@github.com:autometrics-dev/autometrics-py.git"),
    ]
)
def git_config(request):
    return request.param


def test_read_repository_url(monkeypatch, git_config):
    """Test that the repository url is read correctly from git config."""
    (config, expected_url) = git_config
    url = get_repository_url(config)
    assert url == expected_url
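The four fixtures require handling both `[remote "origin"]` and `[remote.origin]` section headers, optional quoting around the value, and ignoring non-origin remotes. A stdlib-only line scan that satisfies all four is sketched below; it is an illustration of the parsing problem, not the actual `get_repository_url` implementation:

```python
import re
from typing import Optional

# Match either header spelling for the origin remote.
_ORIGIN_HEADER = re.compile(r'^\[remote(?: "origin"|\.origin)\]$')
# Capture the url value, stripping optional surrounding quotes.
_URL_LINE = re.compile(r'^url\s*=\s*"?([^"\s]+)"?$')


def parse_repository_url(config_text: str) -> Optional[str]:
    """Scan a git config for the url of the "origin" remote, or None."""
    in_origin = False
    for raw in config_text.splitlines():
        line = raw.strip()
        if line.startswith("["):
            in_origin = bool(_ORIGIN_HEADER.match(line))
        elif in_origin:
            match = _URL_LINE.match(line)
            if match:
                return match.group(1)
    return None
```

A line scan keeps the parser dependency-free; a generic INI parser would still need post-processing to normalize the two section spellings and strip quotes.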
1 change: 1 addition & 0 deletions src/autometrics/tracker/__init__.py
@@ -1 +1,2 @@
from .tracker import *
from .types import *
114 changes: 86 additions & 28 deletions src/autometrics/tracker/opentelemetry.py
@@ -1,21 +1,25 @@
import time
from typing import Optional
from typing import Dict, Optional, Mapping

from opentelemetry.exporter.prometheus import PrometheusMetricReader
from opentelemetry.metrics import (
Meter,
Counter,
Histogram,
UpDownCounter,
set_meter_provider,
)
from opentelemetry.semconv.resource import ResourceAttributes
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.view import View, ExplicitBucketHistogramAggregation
from opentelemetry.exporter.prometheus import PrometheusMetricReader
from opentelemetry.sdk.metrics.export import MetricReader
from opentelemetry.sdk.resources import Resource
from opentelemetry.util.types import AttributeValue

from .exemplar import get_exemplar
from .tracker import Result
from ..exemplar import get_exemplar
from .types import Result
from ..objectives import Objective, ObjectiveLatency
from ..constants import (
AUTOMETRICS_VERSION,
CONCURRENCY_NAME,
CONCURRENCY_DESCRIPTION,
COUNTER_DESCRIPTION,
@@ -24,17 +28,27 @@
HISTOGRAM_NAME,
BUILD_INFO_NAME,
BUILD_INFO_DESCRIPTION,
REPOSITORY_PROVIDER,
REPOSITORY_URL,
SERVICE_NAME,
OBJECTIVE_NAME,
OBJECTIVE_PERCENTILE,
OBJECTIVE_LATENCY_THRESHOLD,
SPEC_VERSION,
)
from ..settings import get_settings

FUNCTION_CALLS_DURATION_NAME = "function.calls.duration"
LabelValue = AttributeValue
Attributes = Dict[str, LabelValue]


def get_objective_boundaries():
"""Get the objective latency boundaries as float values in seconds (instead of strings)"""
return list(map(lambda c: float(c.value), ObjectiveLatency))
def get_resource_attrs() -> Attributes:
attrs: Attributes = {}
if get_settings()["service_name"] is not None:
attrs[ResourceAttributes.SERVICE_NAME] = get_settings()["service_name"]
if get_settings()["version"] is not None:
attrs[ResourceAttributes.SERVICE_VERSION] = get_settings()["version"]
return attrs


class OpenTelemetryTracker:
@@ -45,17 +59,22 @@ class OpenTelemetryTracker:
__up_down_counter_build_info_instance: UpDownCounter
__up_down_counter_concurrency_instance: UpDownCounter

def __init__(self):
exporter = PrometheusMetricReader("")
def __init__(self, reader: Optional[MetricReader] = None):
view = View(
name=HISTOGRAM_NAME,
description=HISTOGRAM_DESCRIPTION,
instrument_name=HISTOGRAM_NAME,
aggregation=ExplicitBucketHistogramAggregation(
boundaries=get_objective_boundaries()
boundaries=get_settings()["histogram_buckets"]
),
)
meter_provider = MeterProvider(metric_readers=[exporter], views=[view])
resource = Resource.create(get_resource_attrs())
readers = [reader or PrometheusMetricReader()]
meter_provider = MeterProvider(
views=[view],
resource=resource,
metric_readers=readers,
)
set_meter_provider(meter_provider)
meter = meter_provider.get_meter(name="autometrics")
self.__counter_instance = meter.create_counter(
@@ -64,6 +83,7 @@ def __init__(self):
self.__histogram_instance = meter.create_histogram(
name=HISTOGRAM_NAME,
description=HISTOGRAM_DESCRIPTION,
unit="seconds",
)
self.__up_down_counter_build_info_instance = meter.create_up_down_counter(
name=BUILD_INFO_NAME,
@@ -79,7 +99,8 @@ def __count(
self,
function: str,
module: str,
caller: str,
caller_module: str,
caller_function: str,
objective: Optional[Objective],
exemplar: Optional[dict],
result: Result,
@@ -97,22 +118,22 @@ def __count(
"function": function,
"module": module,
"result": result.value,
"caller": caller,
"caller.module": caller_module,
"caller.function": caller_function,
OBJECTIVE_NAME: objective_name,
OBJECTIVE_PERCENTILE: percentile,
SERVICE_NAME: get_settings()["service_name"],
},
)

def __histogram(
self,
function: str,
module: str,
start_time: float,
duration: float,
objective: Optional[Objective],
exemplar: Optional[dict],
):
duration = time.time() - start_time

objective_name = "" if objective is None else objective.name
latency = None if objective is None else objective.latency
percentile = ""
@@ -127,6 +148,7 @@ def __histogram(
attributes={
"function": function,
"module": module,
SERVICE_NAME: get_settings()["service_name"],
OBJECTIVE_NAME: objective_name,
OBJECTIVE_PERCENTILE: percentile,
OBJECTIVE_LATENCY_THRESHOLD: threshold,
@@ -142,11 +164,18 @@ def set_build_info(self, commit: str, version: str, branch: str):
"commit": commit,
"version": version,
"branch": branch,
SERVICE_NAME: get_settings()["service_name"],
REPOSITORY_URL: get_settings()["repository_url"],
REPOSITORY_PROVIDER: get_settings()["repository_provider"],
AUTOMETRICS_VERSION: SPEC_VERSION,
},
)

def start(
self, function: str, module: str, track_concurrency: Optional[bool] = False
self,
function: str,
module: str,
track_concurrency: Optional[bool] = False,
):
"""Start tracking metrics for a function call."""
if track_concurrency:
@@ -155,34 +184,44 @@ def start(
attributes={
"function": function,
"module": module,
SERVICE_NAME: get_settings()["service_name"],
},
)

def finish(
self,
start_time: float,
duration: float,
function: str,
module: str,
caller: str,
caller_module: str,
caller_function: str,
result: Result = Result.OK,
objective: Optional[Objective] = None,
track_concurrency: Optional[bool] = False,
):
"""Finish tracking metrics for a function call."""

exemplar = None
# Currently, exemplars are only supported by prometheus-client
# https://github.com/autometrics-dev/autometrics-py/issues/41
# if os.getenv("AUTOMETRICS_EXEMPLARS") == "true":
# if get_settings()["exemplars"]:
# exemplar = get_exemplar()
self.__count(function, module, caller, objective, exemplar, result)
self.__histogram(function, module, start_time, objective, exemplar)
self.__count(
function,
module,
caller_module,
caller_function,
objective,
exemplar,
result,
)
self.__histogram(function, module, duration, objective, exemplar)
if track_concurrency:
self.__up_down_counter_concurrency_instance.add(
-1.0,
attributes={
"function": function,
"module": module,
SERVICE_NAME: get_settings()["service_name"],
},
)

@@ -193,6 +232,25 @@ def initialize_counters(
objective: Optional[Objective] = None,
):
"""Initialize tracking metrics for a function call at zero."""
caller = ""
self.__count(function, module, caller, objective, None, Result.OK, 0)
self.__count(function, module, caller, objective, None, Result.ERROR, 0)
caller_module = ""
caller_function = ""
self.__count(
function,
module,
caller_module,
caller_function,
objective,
None,
Result.OK,
0,
)
self.__count(
function,
module,
caller_module,
caller_function,
objective,
None,
Result.ERROR,
0,
)
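The `reader: Optional[MetricReader]` parameter added to `OpenTelemetryTracker.__init__` is a dependency-injection seam: production code falls back to `PrometheusMetricReader()`, while tests can pass an in-memory reader and inspect what was recorded. A stdlib-only sketch of the pattern (class and method names here are illustrative, not the tracker's real API):

```python
from typing import Optional


class InMemoryReader:
    """Test double that records every metric observation it receives."""

    def __init__(self):
        self.points = []

    def record(self, name: str, value: float, attributes: dict):
        self.points.append((name, value, attributes))


class Tracker:
    def __init__(self, reader: Optional[InMemoryReader] = None):
        # Default to a "real" reader in production; tests inject a fake.
        self.reader = reader or InMemoryReader()

    def finish(self, duration: float, function: str, module: str):
        self.reader.record(
            "function.calls.duration",
            duration,
            {"function": function, "module": module},
        )


reader = InMemoryReader()
Tracker(reader).finish(0.25, "handler", "app")
```

With the seam in place, tests never need to scrape a Prometheus endpoint to verify that a metric was emitted.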
114 changes: 93 additions & 21 deletions src/autometrics/tracker/prometheus.py
@@ -1,12 +1,15 @@
import os
import time
from typing import Optional
from prometheus_client import Counter, Histogram, Gauge

from ..constants import (
AUTOMETRICS_VERSION_PROMETHEUS,
COUNTER_NAME_PROMETHEUS,
HISTOGRAM_NAME_PROMETHEUS,
CONCURRENCY_NAME_PROMETHEUS,
REPOSITORY_PROVIDER_PROMETHEUS,
REPOSITORY_URL_PROMETHEUS,
SERVICE_NAME_PROMETHEUS,
BUILD_INFO_NAME,
COUNTER_DESCRIPTION,
HISTOGRAM_DESCRIPTION,
@@ -16,13 +19,15 @@
OBJECTIVE_PERCENTILE_PROMETHEUS,
OBJECTIVE_LATENCY_THRESHOLD_PROMETHEUS,
COMMIT_KEY,
SPEC_VERSION,
VERSION_KEY,
BRANCH_KEY,
)

from .exemplar import get_exemplar
from .tracker import Result
from ..exemplar import get_exemplar
from .types import Result
from ..objectives import Objective
from ..settings import get_settings


class PrometheusTracker:
@@ -34,8 +39,10 @@ class PrometheusTracker:
[
"function",
"module",
SERVICE_NAME_PROMETHEUS,
"result",
"caller",
"caller_module",
"caller_function",
OBJECTIVE_NAME_PROMETHEUS,
OBJECTIVE_PERCENTILE_PROMETHEUS,
],
@@ -46,16 +53,35 @@ class PrometheusTracker:
[
"function",
"module",
SERVICE_NAME_PROMETHEUS,
OBJECTIVE_NAME_PROMETHEUS,
OBJECTIVE_PERCENTILE_PROMETHEUS,
OBJECTIVE_LATENCY_THRESHOLD_PROMETHEUS,
],
buckets=get_settings()["histogram_buckets"],
unit="seconds",
)
prom_gauge_build_info = Gauge(
BUILD_INFO_NAME, BUILD_INFO_DESCRIPTION, [COMMIT_KEY, VERSION_KEY, BRANCH_KEY]
BUILD_INFO_NAME,
BUILD_INFO_DESCRIPTION,
[
COMMIT_KEY,
VERSION_KEY,
BRANCH_KEY,
SERVICE_NAME_PROMETHEUS,
REPOSITORY_URL_PROMETHEUS,
REPOSITORY_PROVIDER_PROMETHEUS,
AUTOMETRICS_VERSION_PROMETHEUS,
],
)
prom_gauge_concurrency = Gauge(
CONCURRENCY_NAME_PROMETHEUS, CONCURRENCY_DESCRIPTION, ["function", "module"]
CONCURRENCY_NAME_PROMETHEUS,
CONCURRENCY_DESCRIPTION,
[
"function",
"module",
SERVICE_NAME_PROMETHEUS,
],
)

def __init__(self) -> None:
@@ -65,7 +91,8 @@ def _count(
self,
func_name: str,
module_name: str,
caller: str,
caller_module: str,
caller_function: str,
objective: Optional[Objective] = None,
exemplar: Optional[dict] = None,
result: Result = Result.OK,
@@ -78,12 +105,15 @@ def _count(
if objective is None or objective.success_rate is None
else objective.success_rate.value
)
service_name = get_settings()["service_name"]

self.prom_counter.labels(
func_name,
module_name,
service_name,
result.value,
caller,
caller_module,
caller_function,
objective_name,
percentile,
).inc(inc_by, exemplar)
@@ -92,12 +122,11 @@ def _histogram(
self,
func_name: str,
module_name: str,
start_time: float,
duration: float,
objective: Optional[Objective] = None,
exemplar: Optional[dict] = None,
):
"""Observe the duration of the function call."""
duration = time.time() - start_time

objective_name = "" if objective is None else objective.name
latency = None if objective is None else objective.latency
@@ -106,10 +135,12 @@ def _histogram(
if latency is not None:
threshold = latency[0].value
percentile = latency[1].value
service_name = get_settings()["service_name"]

self.prom_histogram.labels(
func_name,
module_name,
service_name,
objective_name,
percentile,
threshold,
@@ -118,35 +149,57 @@ def _histogram(
def set_build_info(self, commit: str, version: str, branch: str):
if not self._has_set_build_info:
self._has_set_build_info = True
self.prom_gauge_build_info.labels(commit, version, branch).set(1)
service_name = get_settings()["service_name"]
repository_url = get_settings()["repository_url"]
repository_provider = get_settings()["repository_provider"]
self.prom_gauge_build_info.labels(
commit,
version,
branch,
service_name,
repository_url,
repository_provider,
SPEC_VERSION,
).set(1)

def start(
self, function: str, module: str, track_concurrency: Optional[bool] = False
):
"""Start tracking metrics for a function call."""
if track_concurrency:
self.prom_gauge_concurrency.labels(function, module).inc()
service_name = get_settings()["service_name"]
self.prom_gauge_concurrency.labels(function, module, service_name).inc()

def finish(
self,
start_time: float,
duration: float,
function: str,
module: str,
caller: str,
caller_module: str,
caller_function: str,
result: Result = Result.OK,
objective: Optional[Objective] = None,
track_concurrency: Optional[bool] = False,
):
"""Finish tracking metrics for a function call."""
exemplar = None
if os.getenv("AUTOMETRICS_EXEMPLARS") == "true":
if get_settings()["enable_exemplars"]:
exemplar = get_exemplar()

self._count(function, module, caller, objective, exemplar, result)
self._histogram(function, module, start_time, objective, exemplar)
self._count(
function,
module,
caller_module,
caller_function,
objective,
exemplar,
result,
)
self._histogram(function, module, duration, objective, exemplar)

if track_concurrency:
self.prom_gauge_concurrency.labels(function, module).dec()
service_name = get_settings()["service_name"]
self.prom_gauge_concurrency.labels(function, module, service_name).dec()

def initialize_counters(
self,
@@ -155,6 +208,25 @@ def initialize_counters(
objective: Optional[Objective] = None,
):
"""Initialize tracking metrics for a function call at zero."""
caller = ""
self._count(function, module, caller, objective, None, Result.OK, 0)
self._count(function, module, caller, objective, None, Result.ERROR, 0)
caller_module = ""
caller_function = ""
self._count(
function,
module,
caller_module,
caller_function,
objective,
None,
Result.OK,
0,
)
self._count(
function,
module,
caller_module,
caller_function,
objective,
None,
Result.ERROR,
0,
)
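Both trackers now accept a precomputed `duration` instead of a `start_time`, moving the `time.time()` subtraction out of `_histogram`/`__histogram` and into the caller. A hypothetical caller-side sketch of that contract, using a stand-in tracker rather than the real classes:

```python
import time


class RecordingTracker:
    """Stand-in tracker that just remembers the durations it is given."""

    def __init__(self):
        self.calls = []

    def finish(self, duration: float, function: str, module: str):
        self.calls.append((function, module, duration))


def observe(tracker, fn, *args, **kwargs):
    # The caller measures elapsed time and hands the tracker a duration.
    start = time.monotonic()
    try:
        return fn(*args, **kwargs)
    finally:
        tracker.finish(time.monotonic() - start, fn.__name__, fn.__module__)


tracker = RecordingTracker()
total = observe(tracker, sum, [1, 2, 3])
```

Passing a duration keeps the trackers free of clock reads, which makes their histogram paths trivially testable with fixed inputs.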