Clog: A Declarative Language for C Static Code Checkers

Clog is a declarative language for describing static code checkers for C. Clog is a dialect of Datalog and adds syntactic pattern matching over the C language. We have built Clog using the MetaDL framework and the Clang C compiler frontend. The MetaDL framework supports Datalog evaluation and syntactic patterns, while the Clang frontend provides AST facts and an AST matching mechanism.

Artifact Description

We provide the Clog artifact as a Docker image. The artifact contains the Clog implementation, the evaluation framework and the test suites we have used in our evaluation. The recommended hardware for running this artifact is a multi-core x86-64 CPU, with at least 32 GB RAM. The time needed for launching our evaluation scripts is about 5 minutes, while their total running time is typically 2-3 hours.

Getting Started Guide

Installing Docker

On Linux-based systems, install the docker command line tool. This tool may be provided by the docker.io or docker packages. Alternatively, you can follow the official installation instructions.
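For example, on Ubuntu and other Debian-based systems the tool can typically be installed with the commands below; the exact package name (docker.io here) may differ on your distribution:

sudo apt-get update
sudo apt-get install docker.io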

We have tested the artifact on Ubuntu Linux 22.04 LTS; however, the Docker image should also run on other Linux distributions, as well as on Windows and macOS.

Running the Docker image

Download the compressed Docker image clog23-cc24v2.tgz from https://doi.org/10.5281/zenodo.10525151.

Check the integrity of the image: sha256sum clog23-cc24v2.tgz should report b597299a1b4e034a840954e3d90ae5f676feec6f8083866df1ea82166faf28c6, and md5sum clog23-cc24v2.tgz should report a0c13b81a00c075fba2ac46120397040.
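For convenience, the two checks can be run as follows (assuming the downloaded file is in the current directory):

sha256sum clog23-cc24v2.tgz
md5sum clog23-cc24v2.tgz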

Load the Docker image:

docker load -i clog23-cc24v2.tgz
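You can optionally confirm that the image was loaded by listing the locally available images; the exact repository name and tag in the listing depend on how the image was built, so expect an entry similar to clog23:

docker images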

Start the docker image:

docker run -it clog23:cc24v1

This should launch an interactive shell similar to:

root@7b018976a191:/#

You will most likely see a different hostname (the container ID) after the @ symbol.

Running the Clog Evaluation Framework

Jump to the directory containing the Clog evaluation framework:

root@7b018976a191:/# cd /work/projects/clog-eval/

Running Clog on the Magma Test Suite

Run the Magma suite:

root@7b018976a191:/work/projects/clog-eval# ./mrun-all.sh

The results produced by running Clog on the Magma suite should be available in magma-results.txt.

root@7b018976a191:/work/projects/clog-eval# cat magma-results.txt
tool,proj,n_clang,n_clang_fixed,n_clog,n_clog_fixed,clog_time,clang_time
null_dereference.json,libpng,0,0,0,0,3.71,29.75
null_dereference.json,openssl,25,24,217,217,191.07,451.80
null_dereference.json,sqlite3,18,17,387,383,150.87,1109.65
null_dereference.json,libxml2,33,33,425,425,23.61,353.05
use_after_free.json,libpng,0,0,0,0,1.32,31.50
use_after_free.json,openssl,0,0,0,0,60.60,502.12
use_after_free.json,sqlite3,0,0,0,0,4.48,1195.48
use_after_free.json,libxml2,0,0,0,0,3.38,373.37
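Since magma-results.txt is plain CSV, it can also be rendered as an aligned table for easier reading, for example with the column utility (assuming it is available in the image):

root@7b018976a191:/work/projects/clog-eval# column -t -s, magma-results.txt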

Running Clog on the Juliet Test Suite

Run the Juliet suite:

root@7b018976a191:/work/projects/clog-eval# ./run-all.sh

The output may contain an error message produced by bc: Runtime error (func=(main), adr=19): Divide by zero. This is safe to ignore.

The results are available in the juliet-results.tex file. The last line contains information about a checker we did not include in our evaluation and thus it should be ignored.

root@7b018976a191:/work/projects/clog-eval# cat juliet-results.tex

CWE-134 & alpha.security.taint.* & 1900 & 570 & 1278 &  780 & 0 & 42.22 & 100.00 & 30.00 & 67.26 & 48.32 & 315.12 \tabularnewline
CWE-476 & core.NullDereference & 270 & 174 & 150 &  16 & 0 & 91.57 & 100.00 & 64.44 & 55.55 & 4.65 & 6.34 \tabularnewline
CWE-78 & alpha.security.taint.* & 1520 & 0 & 1008 &  0 & 0 &  & 100.00 & 0 & 66.31 & 35.51 & 120.51 \tabularnewline
CWE-416 & unix.Malloc & 138 & 36 & 108 &  0 & 0 & 100.00 & 100.00 & 26.08 & 78.26 & 2.27 & 2.69 \tabularnewline
CWE-121 & alpha.security.ArrayBound,alpha.security.ArrayBoundV2,alpha.security.taint.*,alpha.unix.cstring.OutOfBounds,security.insecureAPI.* & 4036 & 2132 & 240 &  6810 & 20 & 23.84 & 92.30 & 52.82 & 5.94 & 79.97 & 186.33 \tabularnewline
CWE-122 & alpha.security.ArrayBound,alpha.security.ArrayBoundV2,alpha.security.taint.*,alpha.unix.cstring.OutOfBounds,security.insecureAPI.* & 2606 & 1174 & 240 &  3542 & 20 & 24.89 & 92.30 & 45.04 & 9.20 & 50.61 & 124.02 \tabularnewline
CWE-126 & alpha.security.ArrayBound,alpha.security.ArrayBoundV2,alpha.security.taint.*,alpha.unix.cstring.OutOfBounds,security.insecureAPI.* & 972 & 449 & 160 &  3180 & 0 & 12.37 & 100.00 & 46.19 & 16.46 & 20.19 & 46.00 \tabularnewline
CWE-127 & alpha.security.ArrayBound,alpha.security.ArrayBoundV2,alpha.security.taint.*,alpha.unix.cstring.OutOfBounds,security.insecureAPI.* & 1288 & 720 & 300 &  3593 & 0 & 16.69 & 100.00 & 55.90 & 23.29 & 26.45 & 72.02 \tabularnewline
CWE-124 & alpha.security.ArrayBound,alpha.security.ArrayBoundV2,alpha.security.taint.*,alpha.unix.cstring.OutOfBounds,security.insecureAPI.* & 1288 & 720 & 300 &  3593 & 0 & 16.69 & 100.00 & 55.90 & 23.29 & 26.81 & 73.73 \tabularnewline
 & alpha.security.ArrayBound,alpha.security.ArrayBoundV2,alpha.security.taint.*,alpha.unix.cstring.OutOfBounds,security.insecureAPI.DeprecatedOrUnsafeBufferHandling & 228 & 0 & 0 &  76 & 0 & 0 &  & 0 & 0 & 2.57 & 4.22 \tabularnewline

Code Size Statistics

The script running Clog on the Juliet suite also produces the code size statistics.

root@7b018976a191:/work/projects/clog-eval# cat clog-src-stats.tex

CWE-134 & 20 & 18 & 25 \tabularnewline
CWE-476 & 14 & 31 & 35 \tabularnewline
CWE-78 & 19 & 18 & 24 \tabularnewline
CWE-416 & 13 & 23 & 28 \tabularnewline
CWE-121 & 8 & 23 & 25 \tabularnewline
CWE-122 & 8 & 23 & 25 \tabularnewline
CWE-126 & 8 & 22 & 24 \tabularnewline
CWE-127 & 9 & 17 & 20 \tabularnewline
CWE-124 & 7 & 18 & 19 \tabularnewline
 & 8 & 22 & 24 \tabularnewline

The last line contains information about a checker we did not include in our evaluation and thus it should be ignored.
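If you want to compare the generated tables against the paper without this extra row, the last line of each file can be dropped, for example with GNU head (assumed to be available in the image):

root@7b018976a191:/work/projects/clog-eval# head -n -1 juliet-results.tex
root@7b018976a191:/work/projects/clog-eval# head -n -1 clog-src-stats.tex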

Mapping the Results to the Claims

There is a 1:1 correspondence between the files produced by running our evaluation scripts and the tables in the paper:

  • Table 2. CSA and Clog results on Juliet test sets: juliet-results.tex. The juliet-results.tex file uses LaTeX table formatting.
  • Table 3. Predicate, rule and pattern literal counts for Clog programs: clog-src-stats.tex. The clog-src-stats.tex file uses LaTeX table formatting.
  • Table 4. CSA and Clog report numbers and running times on Magma test programs: magma-results.txt. The magma-results.txt file uses CSV formatting. Please note that the paper submitted for review contains an error in Table 4: the headers of the two rightmost columns are switched, so the column with the “CSA” header contains the results of Clog (corresponding to the clog_time header in the output file) and the column with the “Clog” header contains the results of the Clang Static Analyzer (corresponding to clang_time). We have reported this error to the paper’s referees as part of the rebuttal process. A quick way to inspect the two timing columns is shown after this list.
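For reference, the two timing columns mentioned above can be inspected directly by extracting the relevant CSV fields (field 7 is clog_time, field 8 is clang_time):

root@7b018976a191:/work/projects/clog-eval# cut -d, -f1,2,7,8 magma-results.txt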

Criteria for Interpreting the Results

For Table 2 and Table 4 we expect that the report counts match precisely. The values for running times may vary, but we expect them to be roughly proportional to the values reported in the table. For Table 3 we expect that the number of predicates, rules and pattern literals matches precisely.
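If you want to keep copies of the result files outside the container, they can be copied to the host from a second terminal with docker cp. The container ID reported by docker ps will differ from the example below:

docker ps
docker cp 7b018976a191:/work/projects/clog-eval/magma-results.txt .
docker cp 7b018976a191:/work/projects/clog-eval/juliet-results.tex .
docker cp 7b018976a191:/work/projects/clog-eval/clog-src-stats.tex .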