q allows direct SQL-like queries on CSVs/TSVs (and any other tabular text files), including joins and any other SQL construct, and supports automatic detection of column types and names.
"q allows performing SQL-like statements on tabular text data, including joins and subqueries"
- Seamless multi-table SQL support, including joins. Filenames are simply used instead of table names (use - for stdin); see the join sketch after this list.
- Automatic column name and column type detection (allows working more naturally with the data)
- Multiple parsing modes - relaxed and strict. Relaxed mode makes it easy to parse semi-structured data, such as log files.
- Standard installation - RPM, Homebrew (Mac). Debian package coming soon.
- Support for quoted fields
- Full UTF-8 support (and other encodings)
- Handling of gzipped files
- Output delimiter matching and selection
- Output beautifier
- man page when installed through the RPM package
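As a sketch of the multi-table support, here is a join between two hypothetical tab-delimited files - users.tsv (columns id,name) and purchases.tsv (columns user_id,amount) are made up for illustration:
q -t -H "select u.name,sum(p.amount) as total from ./purchases.tsv p join ./users.tsv u on (p.user_id = u.id) group by u.name order by total desc"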
Example 1:
q -H -t "select count(distinct(uuid)) from ./clicks.csv"
Output 1:
229
Example 2:
q -H -t "select request_id,score from ./clicks.csv where score > 0.7 order by score desc limit 5"
Output 2:
2cfab5ceca922a1a2179dc4687a3b26e 1.0
f6de737b5aa2c46a3db3208413a54d64 0.986665809568
766025d25479b95a224bd614141feee5 0.977105183282
2c09058a1b82c6dbcf9dc463e73eddd2 0.703255121794
Example 3:
q -t -H "select strftime('%H:%M',date_time) hour_and_minute,count(*) from ./clicks.csv group by hour_and_minute"
Output 3:
07:00 138148
07:01 140026
07:02 121826
Example 4:
q -t -H "select hashed_source_machine,count(*) from ./clicks.csv group by hashed_source_machine"
Output 4:
47d9087db433b9ba.domain.com 400000
Example 5 (total size per user/group in the /tmp subtree). Since the input has no header row, q auto-assigns the column names c1,c2,...; here c5 and c6 are the user and group and c7 is the size in bytes:
sudo find /tmp -ls | q "select c5,c6,sum(c7)/1024.0/1024 as total from - group by c5,c6 order by total desc"
Output 5:
mapred hadoop 304.00390625
root root 8.0431451797485
smith smith 4.34389972687
Example 6 (top 3 user ids with the largest number of owned processes, sorted in descending order):
Note the usage of the autodetected column name UID in the query.
ps -ef | q -H "select UID,count(*) cnt from - group by UID order by cnt desc limit 3"
Output 6:
root 152
harel 119
avahi 2
A beginner's tutorial can be found here
Current stable version is 1.3.0.
No special requirements other than Python >= 2.5 are needed.
Just run brew install q.
Thanks to @stuartcarnie for the initial Homebrew formula.
- Download the main q executable from here into a folder in the PATH.
- Make the file executable.
For Windows machines, also download q.bat here into the same folder and use it to run q.
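A minimal sketch of the manual installation steps (the URL below is a placeholder - substitute the actual download link above):
mkdir -p ~/bin
curl -L <q-executable-url> -o ~/bin/q    # placeholder URL - use the download link above
chmod +x ~/bin/q                         # make the file executable
export PATH="$HOME/bin:$PATH"            # ensure the folder is in the PATH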
Download the RPM here.
Install using rpm -ivh <rpm-name>.
RPM releases also contain a man page. Just enter man q.
Debian packaging is in progress. In the meantime, install manually. See the section below.
Have you ever stared at a text file on the screen, wishing it were a database so you could ask it anything you want? I had that feeling many times, and I finally understood that it's not the database that I want. It's the language - SQL.
SQL is a declarative language for data, and as such it allows me to define what I want without caring about how exactly it's done. This is the reason SQL is so powerful, because it treats data as data and not as bits and bytes (and chars).
The goal of this tool is to provide a bridge between the world of text files and of SQL.
You can use this gitter chat room for contacting me directly. I try to be available in the chat room as much as possible.
q's basic usage is very simple: q <flags> <query>. There are many features under the hood, though, exposed through the flags that can be passed to the command.
The simplest execution is q "SELECT * FROM myfile", which prints the entire file.
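For instance, querying a comma-delimited file with a header row might look like this (mydata.csv and its columns are made up for illustration; -d sets the input delimiter):
q -d , -H "select name,value from ./mydata.csv where value > 100"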
Complete information can be found here
Some implementation details can be found here
- No checks or bounds on data size
- Spaces in file names are not supported yet. I'm working on it (a workaround sketch follows this list).
- It is possible that some rare cases of subqueries are not supported yet. Please open an issue if you find such a case. This will be fixed once the tool performs its own full-blown SQL parsing.
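Until spaces in file names are supported, one possible workaround is to point a space-free symlink at the file (a shell sketch; the paths are made up):
ln -s "/path/with spaces/my data.csv" /tmp/mydata.csv    # create a space-free alias for the file
q -H -d , "select count(*) from /tmp/mydata.csv"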
- Faster reuse of previous data loading
- Allow working with external DB
- Real parsing of the SQL, allowing smarter execution of queries.
- Smarter batch insertion to the database
- Provide mechanisms beyond SELECT, such as INSERT and CREATE TABLE AS SELECT.
Some information regarding the rationale for this tool and related philosophy can be found here
History of changes can be found here
Any feedback/suggestions/complaints regarding this tool would be much appreciated. Contributions are most welcome as well, of course.
Harel Ben-Attia, [email protected], @harelba on Twitter