Service Data Sources

In addition to the Data Sources you upload, Tinybird provides other “Service Data Sources” that let you inspect what’s going on in your account. For example, you can get real-time stats about API calls or a log of every operation over your Data Sources.

It’s the same as using system tables in any database, except that in this case they contain information about service usage.

These are the available tables:


Contains stats about your snapshot views


tinybird.pipe_stats

Contains stats about your Pipe endpoint API calls, aggregated per day.

  • date (Date): view date

  • pipe_id (String): pipe id as returned in Pipes API.

  • view_count: number of views

  • error_count: number of requests that returned an error

  • avg_duration_state: average duration intermediate state (see below for an example of how to query State columns)

  • quantile_timing_state: 0.9, 0.95, and 0.99 quantiles intermediate state. Time in milliseconds

Querying tables with State columns is easy. A State column holds an intermediate aggregated state; to get the final value you just need to apply the corresponding aggregation function with the -Merge combinator. See the following example:

how to get stats for a particular Pipe:

  SELECT
    date,
    sum(view_count) view_count,
    sum(error_count) error_count,
    avgMerge(avg_duration_state) avg_time,
    quantilesTimingMerge(0.9, 0.95, 0.99)(quantile_timing_state) quantiles_timing_in_millis_array
  FROM tinybird.pipe_stats
  WHERE pipe_id = 'PIPE_ID'
  GROUP BY date


Contains the same information as tinybird.pipe_stats, but in real time and without aggregation: every single request is logged into this table. Rows are removed after 2 days, so if you want to operate on data older than 2 days you should use the aggregated version described above.

  • start_datetime (DateTime): when the API call started

  • pipe_id (String): Pipe id as returned in Pipes API.

  • duration (Float): API call duration in seconds

  • url (String): requested URL (the token parameter is removed for security reasons)

  • error (Boolean): true if the query returned an error.
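For example, a sketch of how to list the latest failed requests for a particular Pipe. Note that the name of this real-time table is not shown above; tinybird.pipe_stats_rt is an assumption, so check the table name available in your account:

  SELECT start_datetime, duration, url
  FROM tinybird.pipe_stats_rt
  WHERE pipe_id = 'PIPE_ID'
    AND error = 1
  ORDER BY start_datetime DESC
  LIMIT 100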


tinybird.block_log

This Data Source contains details about how data is ingested into your Data Sources. Ingestion is done in chunks: for instance, when we ingest from a URL, we split the download into several requests whenever possible, resulting in different ingestion blocks. The same happens when data is uploaded with a multipart request: we ingest it in several chunks as the data arrives. You can use this Service Data Source to spot problematic parts of your data.

  • timestamp DateTime: when the block was ingested.

  • import_id String: the id of the import operation

  • job_id Nullable(String): the Job id that ingested the block of data, if it was ingested by URL. In this case, the import_id and the job_id must have the same value.

  • request_id String: the id of the request that performed the operation. In this case, the import_id and the request_id must have the same value.

  • source String: either the URL, or the stream or body keyword.

  • block_id String: the block identifier. You can cross-reference it with the blocks_ids column of the tinybird.datasources_ops_log Service Data Source.

  • status String: done or error.

  • datasource_id String: the Data Source id, which stays consistent across rename operations.

  • datasource_name String: The Data Source name when the block was ingested.

  • start_offset Nullable(Int64): if the ingestion was split, the starting byte offset of this block.

  • end_offset Nullable(Int64): if the ingestion was split, the ending byte offset of this block.

  • rows Nullable(Int32): how many rows the block contained.

  • parser Nullable(String): whether we used the native block parser or had to fall back to row-by-row parsing.

  • quarantine_lines Nullable(UInt32): If any, how many rows got into the quarantine Data Source.

  • empty_lines Nullable(UInt32): If any, how many empty lines were skipped.

  • bytes Nullable(UInt32): How many bytes the block had.

  • processing_time Nullable(Float32): How long it took.

  • processing_error Nullable(String): Detailed message in case of error.
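As a sketch of how these columns can be combined to spot problematic parts of your data, the following query lists blocks from the last day that errored or sent rows to quarantine:

  SELECT timestamp, block_id, datasource_name, quarantine_lines, processing_error
  FROM tinybird.block_log
  WHERE timestamp > now() - INTERVAL 1 DAY
    AND (status = 'error' OR quarantine_lines > 0)
  ORDER BY timestamp DESC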


tinybird.datasources_ops_log

Contains all operations performed on your Data Sources. We track the following operations: create, append, replace, delete, truncate, and rename. We log all of them with extra information in this Data Source.

  • timestamp DateTime: When the operation happened.

  • event_type String: The operation type.

  • datasource_id String: The id of your Data Source. The Data Source id is consistent after renaming operations. You should use the id when you want to track name changes.

  • datasource_name String: The name of your Data Source when the operation happened.

  • result String: Either ok or error.

  • elapsed_time Float32: How much time the operation took.

  • error Nullable(String): Detailed error message if the result was error.

  • import_id Nullable(String): The id of the import operation, if data has been ingested using one of the following operations: create, append, or replace.

  • job_id Nullable(String): The id of the job that performed the operation, if any. If data has been ingested, the import_id and the job_id must have the same value.

  • request_id String: The id of the request that performed the operation. If data has been ingested, the import_id and the request_id must have the same value.

  • rows Nullable(UInt64): How many rows the operations affected. This depends on the event_type: for the append event, how many rows got inserted; for the delete or truncate events, how many rows the Data Source had; for the replace type, how many rows the Data Source has after the operation.

  • rows_quarantine Nullable(UInt64): If any, how many rows got into the quarantine Data Source.

  • blocks_ids Array(String): The list of blocks ids that we used for the operation. See the tinybird.block_log Service Data Source for more details.

  • Options Nested(Names String, Values String): We store key-value pairs with extra information for some operations. For the replace event, the rows_before_replace key tracks how many rows the Data Source had before the replacement happened, and the replace_condition key shows which condition was used. For the append and replace events we store the source of the data, e.g. the URL, or the body or stream keywords. For the rename event, we store the old_name and the new_name.
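As an illustration, a Nested column is exposed in ClickHouse SQL as parallel arrays (here Options.Names and Options.Values), which you can flatten with ARRAY JOIN. A sketch of how to inspect the extra information logged for replace operations:

  SELECT timestamp, datasource_name, name, value
  FROM tinybird.datasources_ops_log
  ARRAY JOIN Options.Names AS name, Options.Values AS value
  WHERE event_type = 'replace'
  ORDER BY timestamp DESC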