zap me: ⚡️mleku@getalby.com
nostr relay built from a heavily modified fork of nbd-wtf/go-nostr and fiatjaf/relayer, aimed at maximum performance, simplicity and memory efficiency. It includes:
- a lot of other bits and pieces accumulated from nearly 8 years of working with Go: logging and run control, user data directories (windows, mac, linux, android)
- a cleaned up and unified fork of the btcd/dcred BIP-340 signatures, including the use of bitcoin core's BIP-340 implementation (more than 4x faster than btcd)
- AVX/AVX2 optimized SHA256 and a SIMD hex encoder
- libsecp256k1-enabled signature creation and verification (see p256k/README.md)
- efficient, mutable byte slice based hash/pubkey/signature encoding in memory (zero allocation decode from the wire)
- a custom badger based event store with a garbage collector that prunes the least recently accessed events once the store exceeds a specified size, with data encoded in a more space efficient format based on the nostr canonical JSON array event form
- a vanity npub generator that can mine a 5 letter suffix in around 15 minutes on a 6 core Ryzen 5 processor using the CGO bitcoin core signature library
- a reverse proxy tool with support for Go vanity imports, nip-05 npub DNS verification, and its own TLS certificates
If you just want to run it from source, you should check out a tagged version. The commit messages on these tags explain the state of the code at that point.
In general, the most stable versions are new minor tags, eg v1.2.0 or v1.23.0; patch versions may not be stable and occasionally may not compile (not very often).
Go 1.24 or better is recommended. Go 1.23.1 is the minimum required.
In general, the main dev branch will build, but occasionally may not. It is where new commits are added once they are (mostly) working, and it allows people to easily see ongoing activity. IT IS NOT GUARANTEED TO BE STABLE.
Use tags to pin to a specific version. Tags follow the standard Go semver pattern vX.X.X.
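For example, to pin to one of the tags mentioned above (the tag name here is illustrative):
git fetch --tags
git checkout v1.2.0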
By default, Go will usually be configured with CGO_ENABLED=1. This selects the use of the C library from bitcoin core, which does signing and verification much faster (4x and better) but complicates the build process, as you have to install the library beforehand. There are instructions in p256k/README.md for doing this.
To disable it, set the environment variable CGO_ENABLED=0 and the Go compiler will automatically fall back to the btcec based secp256k1 signature library.
export CGO_ENABLED=0
cd cmd/realy
go build .
This will build the binary and place it in cmd/realy and then you can move it where you like.
To produce a static binary, whether you use the CGO secp256k1 or disable CGO as above:
go build --ldflags '-extldflags "-static"' -o ~/bin/realy ./cmd/realy/.
will place it into your ~/bin/ directory, and it will work on any system of the same architecture with the same glibc major version (which has been 2 for a long time).
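You can verify the result with ldd, which should report that a static binary is not a dynamic executable:
ldd ~/bin/realy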
Running realy with no configuration uses default settings, which will probably not be what you want. The output of realy env can be directed to the profile location to make the settings editable without manually setting them on the command line:
realy env > $HOME/.config/realy/.env
You can now edit this file to alter the configuration.
Regarding the configuration system: this is an element of many servers that is absurdly complex, which is why realy does not use a complicated scheme, just a simple library that automatically configures a series of options, plus a simple info print:
realy help
will show you the instructions. The one simple extension is the ability to use a standard formatted .env file to configure all the options for an instance.
realy already accepts all the standard NIPs, mainly nip-01, and many other types are recognised, such as NIP-42 auth messages; it uses and parses relay lists, and all that other stuff. It has maybe the most faithful implementation of NIP-42, but most clients don't implement it correctly, or at all. Which is sad, but what can you do?
Using websockets for everything is stupid. Only subscriptions need the capabilities that are more easily accessed through sockets. So we are going to implement a simplified form for accessing nostr events that is based on the principles of RESTful interfaces.
Instead of the confusing authentication of nip-42, which almost nobody has implemented, this will just use nip-98 in all cases for authentication, which is just an HTTP header field containing an event that references the URL and method of the query. Whether authentication is required will be designated explicitly in the capabilities described below.
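For illustration, a nip-98 auth event is a kind 27235 event whose u and method tags reference the request; the signed event is serialized and base64 encoded into the Authorization header with the Nostr scheme (the values here are a sketch):
{"kind":27235,"created_at":1700000000,"tags":[["u","http://localhost:3334/event"],["method","POST"]],"content":"",...}
Authorization: Nostr eyJraW5kIjoyNzIzNSw...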
Calls to these endpoints MUST have an Accept header with a recognised encoding. Different values in this header field allow for entirely separate protocols, both encoding AND API.
- application/nostr+json designates the use of the standard encoding, though the actual APIs may diverge from it, such as segregating facets of the "filter" to be available only from a given endpoint.
- application/nostr+text is a new format inspired by the standard encodings used in old protocols like SMTP, POP, NNTP and IMAP, which are optimized for human reading and composition.
Of course, translating between these protocols will require additional complexity, and for the moment we are focusing on implementing the application/nostr+json style, as codecs for most of these elements already exist or are just plain simple JSON.
Requests to the simplified nostr protocol MUST have this header set so the relay knows what format to expect in the request body, as well as what format to return data in.
/api is an unprotected endpoint that works like nip-11 but is more sensible and relates to the RESTful endpoints described here.
The protocols available, and the encoding of this message, differ based on the Accept field of the HTTP header. We describe the application/nostr+json versions here and will implement them first.
Calling this with GET and Accept: application/nostr+json will return a JSON array containing arrays with the following format:
["<path>", "<url of implementing repo>", "/path/to/spec.adoc","<version in semver vX.X.X-extra>",[["<flag>","<flag option>"]]]...
- path means the path string after the relay URL that invokes the protocol API method, eg "/api"
- the URL refers to the HTTP Git URL, and the path is from the root of the repository (not necessarily a URL you can open in a web browser)
- version is the semver tag on the Git repository that contains the current documentation and reference implementation
- the flags are an optional array of flags specifying protocol features that are or aren't available on this method endpoint
From these, and with a little research, anyone should be able to construct valid queries for the protocol.
note 👉 this message will differ if you use a different Accept type, as different encodings may have differing degrees of implementation. The above is the application/nostr+json form.
An example of an entry from a capabilities message signifying that DMs and application specific data require auth:
["/events","https://realy.lol","/readme.adoc","v1.9.6",[["auth-required","kind=[4,1059,1060,30078]]]]
In such a case, a malicious snooper would not be able to get at an event that doesn't match up with the auth provided for the query, and would not be told whether the relay has the event; the relay would just not return it. If the auth proves the requester is party to the event, it is returned. The attacker learns only that they can't get the event, not whether the relay has it, and that the relay does not return these events without privilege being proven, which is what the default should be.
/relayinfo is an alias for what the / endpoint (the main NIP style, websocket upgrading endpoint) returns when the Accept HTTP header is set to application/nostr+json and plain HTTP is used instead of WS: the nip-11 relay information document. This exists just because it's easy to do; users of the simplified protocol should instead use the capabilities endpoint, /api, described above.
/event is the endpoint for publishing events.
This requires nip-98 authentication in accordance with the nip-11 restricted-writes field in limitations. It should also show as auth-required in the capabilities, which should be used to restrict access to subscribers of the relay service.
The standard OK envelope JSON will be returned, eg:
["OK",true] ["OK",false,"machine-readable: human-readable explanation"]
/events is the endpoint for retrieving events.
Rather than use the muddled "filter" structure, this expects a simple array of event IDs in the chosen encoding standard; for application/nostr+json this means an array of hexadecimal strings.
The result is JSONL formatted events, returned in what should be reverse chronological order.
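An illustrative sketch of a request body and the JSONL response shape (IDs are placeholders):
["<event id in hex>","<event id in hex>"]
{"id":"<event id in hex>","pubkey":"<pubkey in hex>","kind":1,...}
{"id":"<event id in hex>","pubkey":"<pubkey in hex>","kind":1,...}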
/filter is the endpoint for the main set of criteria used in a filter in standard nostr websockets.
The structure for application/nostr+json is as follows:
{ "authors":["npubs in hex",...], "kinds":[1,2,3,...], "#a":["tag values for letter tag",...],... "since":<timestamp>, "until":<timestamp>, }
note 👉 there is no ids, search or limit field in here.
The result from this is an array of the event IDs that match the filter, in reverse chronological order. By doing this, the burden of maintaining query state is shifted to the client, which is then free to fetch the full events using the events endpoint.
The capabilities flag "limit" expresses how many results will be returned, and it can be relied upon that the last event in the return has a newer timestamp than any that were truncated when the limit is hit.
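A sketch of a concrete query, using the pubkey from the export examples below (the other values are made up):
{
  "authors": ["4c800257a588a82849d049817c2bdaad984b25a45ad9f6dad66e47d3b47e3b2f"],
  "kinds": [1],
  "since": 1700000000
}
The response would then be an array of matching event IDs in hex, newest first, ready to pass to the events endpoint.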
As mentioned above for the capabilities endpoint (/api), if there is an auth-required restriction similar to the one described there, such filters will only be processed with auth, and the results will include only events that contain the pubkey that was authed.
/fulltext is the endpoint for making a query using words that should be processed by a full text search engine.
{
  "authors": ["<pubkeys in hex>", ...],
  "kinds": [1, 2, 3, ...],
  "#a": ["<tag values for letter tag>", ...],
  ...
  "since": <timestamp>,
  "until": <timestamp>,
  "search": "full text search text"
}
The results are the same as filter.
The purpose of also providing the filter fields is that they form the basis of the matches; within that set, the full text matches are then filtered. If there are no filter fields there can be a very large number of results, so for this endpoint the relay will list a limit in the capabilities, which should be somewhere around 1000-10000.
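An illustrative fulltext query might then look like this (the values are made up):
{
  "kinds": [1],
  "since": 1700000000,
  "search": "full text search keywords"
}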
/relay is an endpoint that accepts a single event to be sent to open subscriptions.
This of course includes standard nostr websocket subscriptions as well as the ones in the next section.
When using application/nostr+json this accepts only ephemeral event kinds (the 20000-29999 range), per standard nostr kind semantics; on other encodings, this is how clients specify the behaviour.
Like event, this endpoint may also have access restrictions; the relevant flag in capabilities must specify this.
/subscribe is an endpoint that upgrades to a websocket if authorized, and delivers event IDs in separate messages, in the same encoding as used to make the request.
It uses the exact same query structure as filter for the matching criteria, but the comparison is made as each event is received, and matches are then dispatched to subscribed clients. If the relay has a /fulltext endpoint, the search field can also be used: after running the filter to check the event, the relay then inspects the matches for fulltext matches. This does not require a fulltext index to work, as all the relay has to do is scan the matching events for the keywords.
note 👉 only the event IDs are given; they must be fetched separately by the client. This enables clients to opt to defer loading them for bandwidth conservation reasons.
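A minimal client sketch in Go, assuming the endpoint accepts the filter JSON as the first websocket message and then streams one event ID per message (the gorilla/websocket library and this message framing are illustrative assumptions, and the nip-98 header on the upgrade request is omitted for brevity):
package main

import (
	"fmt"
	"log"

	"github.com/gorilla/websocket"
)

func main() {
	// dial the subscribe endpoint (default bind address, see below)
	conn, _, err := websocket.DefaultDialer.Dial("ws://localhost:3334/subscribe", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	// send the same query structure as used by the /filter endpoint
	filter := `{"kinds":[1],"since":1700000000}`
	if err := conn.WriteMessage(websocket.TextMessage, []byte(filter)); err != nil {
		log.Fatal(err)
	}
	// each subsequent message carries an event ID; fetch the full
	// event via the /events endpoint when it is actually needed
	for {
		_, msg, err := conn.ReadMessage()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("new event id: %s\n", msg)
	}
}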
realy has full nip-98 support, and there is a command line tool, curdl, that works like curl but puts the correct nostr auth event into the HTTP headers; it can be used for these functions.
To install curdl from source, just run
go install ./cmd/curdl/.
with your current working directory at the repository root.
To use curdl, first of all you need to add your npub to the configuration of realy - it can be in hex or bech32 npub format at your option, see above.
To authenticate, you need to set the environment variable NOSTR_SECRET_KEY=nsec1… which expects the key to be in bech32 nsec format. curdl will then use this to sign the authentication event that it embeds in the HTTP header.
The address to use for curdl commands is the same as the websocket address, which by default binds to all interfaces on port 3334, including 127.0.0.1/localhost. This can be reconfigured as per the previous section, by editing the environment variables file or setting environment variables.
You can export everything in the event store through the default http://localhost:3334 endpoint like so:
curdl get http://localhost:3334/export > everything.jsonl
Or just all of the whitelisted users, and all events with p tags mentioning them:
curdl get http://localhost:3334/export/users > users.jsonl
Or just one user (this also includes matching p tags):
curdl get http://localhost:3334/export/4c800257a588a82849d049817c2bdaad984b25a45ad9f6dad66e47d3b47e3b2f > mleku.jsonl
Or several users, with hyphens between the hexadecimal public keys (ditto above):
curdl get http://localhost:3334/export/4c800257a588a82849d049817c2bdaad984b25a45ad9f6dad66e47d3b47e3b2f-454bc2771a69e30843d0fccfde6e105ff3edc5c6739983ef61042633e4a9561a > mleku_gojiberra.jsonl
And import as well, to upload one of these files (nostrudel and coracle also have functions to export their app database of events as JSONL). Note the post in the command: it indicates that the filename after post will be uploaded to the URL that follows.
curdl post nostrudel.jsonl http://localhost:3334/import
It is not necessary, but you can also optionally provide the SHA256 checksum of the file, after the filename and before the URL:
curdl post nostrudel.jsonl DEADBEEFCAFE123455566... http://localhost:3334/import
However, if you use curdl with other nip-98 auth capable HTTP endpoints, they may require this, and on a standard linux distribution you can compute it conveniently like this:
curdl post nostrudel.jsonl $(sha256sum nostrudel.jsonl | cut -d' ' -f1) http://localhost:3334/import
This adds the "payload" key to the header with that hash in it. It does not verify it is correct.