For paginated resources, there's always a per-page limit that's either the limit requested by the client, the default limit provided by the server, or the maximum limit supported by the server. We provide `RequestContext::page_limit()` to choose the right value, but of course consumers have to actually use it. It'd be nice if Dropshot could identify when they've done the wrong thing. That way, consumers wouldn't have to test this behavior themselves for every one of their paginated resources. We could do this if, when calling `ResultsPage::new()`, the consumer provided the pagination parameters. Then we could call `page_limit()` directly and compare that to the number of items they provided.
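A minimal sketch of the proposed check, using simplified stand-in types rather than the real Dropshot API (the struct and function names here are illustrative, not Dropshot's):

```rust
// Hypothetical, simplified model of the proposal: if ResultsPage::new()
// received the pagination parameters, it could compute the effective
// page limit itself and compare it to the number of items supplied.

struct PaginationParams {
    requested_limit: Option<usize>,
}

struct ServerConfig {
    default_limit: usize,
    max_limit: usize,
}

/// Mirrors the idea behind RequestContext::page_limit(): the client's
/// requested limit clamped to the server maximum, falling back to the
/// server default when the client didn't specify one.
fn page_limit(params: &PaginationParams, config: &ServerConfig) -> usize {
    params
        .requested_limit
        .map(|l| l.min(config.max_limit))
        .unwrap_or(config.default_limit)
}

/// Stand-in for a ResultsPage constructor that takes the pagination
/// parameters and can therefore detect a consumer that returned more
/// items than the limit allows.
fn results_page_new<T>(
    items: Vec<T>,
    params: &PaginationParams,
    config: &ServerConfig,
) -> Result<Vec<T>, String> {
    let limit = page_limit(params, config);
    if items.len() > limit {
        // Policy decision goes here: panic, 500, warn, or truncate.
        return Err(format!(
            "paginated endpoint returned {} items, but the limit was {}",
            items.len(),
            limit
        ));
    }
    Ok(items)
}

fn main() {
    let config = ServerConfig { default_limit: 10, max_limit: 100 };
    let params = PaginationParams { requested_limit: Some(5) };
    // Consumer forgot to apply the limit: 8 items against a limit of 5.
    let result = results_page_new((0..8).collect::<Vec<u32>>(), &params, &config);
    assert!(result.is_err());
}
```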
There are a lot of things we could choose to do if the consumer gave us the wrong number of items:
- panic: this is, after all, a programmer error, and a core file will give people the best chance of debugging it.
- return a 500 response: this presumably has a smaller blast radius for the user than panicking, but it's still noticeable and potentially debuggable from the log (although probably less so than a core file would be). It's still not great for clients.
- log a warning and proceed anyway, returning all the items we were given: less noticeable (which is potentially bad), but also doesn't impact the consumer's users.
- log a warning and truncate the results to the intended limit: similarly unnoticeable, but probably more correct in that it obeys the limit.
On the one hand, it seems silly to panic or fail a request that we can otherwise process correctly. But allowing these requests to complete defeats the whole purpose of pagination: among the reasons we use pagination are to bound the amount of work an operation does and the amount of time it takes, in turn to facilitate scalability and availability and mitigate DoS. If we get here because a consumer forgets to append "LIMIT N" to a SQL query, and Dropshot is basically silent about it, the user may get by for a long time, only to discover after months when they dig into some pathological performance problem that this has been acting like an unpaginated API (and they've got a DoS vector in their application).
We could make this a tunable policy, but I think we may as well start by picking the policy we want for ourselves and implementing that. I lean towards panicking or returning a 500.
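If the behavior ever did become tunable, the four options above could be expressed as an enum along these lines (a sketch with made-up names, not a proposed Dropshot API):

```rust
// Hypothetical encoding of the four policies discussed above.
// Names are illustrative only.

#[derive(Clone, Copy)]
enum LimitViolationPolicy {
    /// Programmer error: abort and leave a core file.
    Panic,
    /// Fail the request with an internal server error.
    Fail500,
    /// Log a warning but return everything the consumer gave us.
    WarnAndReturnAll,
    /// Log a warning and enforce the limit by truncating.
    WarnAndTruncate,
}

fn apply_policy<T>(
    mut items: Vec<T>,
    limit: usize,
    policy: LimitViolationPolicy,
) -> Result<Vec<T>, String> {
    if items.len() <= limit {
        return Ok(items);
    }
    match policy {
        LimitViolationPolicy::Panic => {
            panic!("endpoint returned {} items, limit was {}", items.len(), limit)
        }
        LimitViolationPolicy::Fail500 => Err(format!(
            "internal error: {} items exceeds page limit {}",
            items.len(),
            limit
        )),
        LimitViolationPolicy::WarnAndReturnAll => {
            eprintln!("warning: {} items exceeds page limit {}", items.len(), limit);
            Ok(items)
        }
        LimitViolationPolicy::WarnAndTruncate => {
            eprintln!("warning: truncating {} items to page limit {}", items.len(), limit);
            items.truncate(limit);
            Ok(items)
        }
    }
}

fn main() {
    // Consumer returned 8 items against a limit of 5.
    let page = apply_policy((0..8).collect::<Vec<u32>>(), 5, LimitViolationPolicy::WarnAndTruncate);
    assert_eq!(page.unwrap().len(), 5);
}
```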
This is related to #20, which also needs the limit available in `ResultsPage::new()`.