- Mar 30, 2021
Nick Terrell authored
* Switch to yearless copyright per FB policy
* Fix up SPDX-License-Identifier lines in `contrib/linux-kernel` sources
* Add zstd copyright/license header to the `contrib/linux-kernel` sources
* Update the `tests/test-license.py` to check for yearless copyright
* Improvements to `tests/test-license.py`
* Check `contrib/linux-kernel` in `tests/test-license.py`
- Mar 25, 2021
- Jan 04, 2021
Nick Terrell authored
- Mar 27, 2020
Nick Terrell authored
* All copyright lines now have -2020 instead of -present
* All copyright lines include "Facebook, Inc"
* All licenses are now standardized

The copyright in `threading.{h,c}` is not changed because it comes from zstdmt. The copyright and license of `divsufsort.{h,c}` is not changed.
- Oct 22, 2019
Nick Terrell authored
* A copy-paste error made it so we weren't running the advanced/cdict streaming tests with the old API.
* Clean up the old streaming tests to skip incompatible configs.
* Update `results.csv`.

The tests now catch the bug in #1787.
- Mar 21, 2019
Nick Terrell authored
* Test all of the `ZSTD_initCStream*()` variants.
* Fix a typo in the zstdcli method.
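For context, the `ZSTD_initCStream*()` family covers several initialization paths. The sketch below is illustrative only (it is not the harness code from this commit) and assumes the experimental declarations guarded by `ZSTD_STATIC_LINKING_ONLY` are available; the function names themselves are real zstd API.

```c
/* Illustrative only: exercising several ZSTD_initCStream*() entry points.
 * The *_usingDict / *_advanced / *_usingCDict variants are declared behind
 * ZSTD_STATIC_LINKING_ONLY in zstd.h. */
#define ZSTD_STATIC_LINKING_ONLY
#include <stddef.h>
#include <zstd.h>

static void init_variants(ZSTD_CStream* zcs,
                          const void* dict, size_t dictSize,
                          const ZSTD_CDict* cdict,
                          unsigned long long pledgedSrcSize)
{
    /* Plain level-based init. */
    ZSTD_initCStream(zcs, 3);

    /* Init with a raw dictionary buffer. */
    ZSTD_initCStream_usingDict(zcs, dict, dictSize, 3);

    /* Init with explicit ZSTD_parameters and a pledged source size. */
    ZSTD_parameters const params = ZSTD_getParams(3, pledgedSrcSize, dictSize);
    ZSTD_initCStream_advanced(zcs, dict, dictSize, params, pledgedSrcSize);

    /* Init with a prebuilt (digested) dictionary. */
    ZSTD_initCStream_usingCDict(zcs, cdict);
}
```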
- Feb 15, 2019
Nick Terrell authored
- Dec 20, 2018
Nick Terrell authored
* Add configs that test multithreading, LDM, and setting explicit parameters.
* Update the `compress cctx` method to accept `ZSTD_parameters`.
* Compile against the multithreaded `libzstd.a`.
* Update `results.csv` for the new configs.

Unless you think there are more configs/methods I should test, I think we have a fairly wide set of configs/methods, so I'll pause adding more for now.
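To illustrate the kinds of configs being added, here is a minimal sketch (not the harness code itself) of compressing with explicit `ZSTD_parameters`, and with multithreading plus long-distance matching. `ZSTD_compress_advanced()` and `ZSTD_getParams()` are experimental (`ZSTD_STATIC_LINKING_ONLY`) API; the `ZSTD_c_*` setters and `ZSTD_compress2()` are the stable equivalents in newer zstd releases.

```c
#define ZSTD_STATIC_LINKING_ONLY
#include <stddef.h>
#include <zstd.h>

/* Compress with explicit ZSTD_parameters and no dictionary. */
static size_t compress_explicit_params(ZSTD_CCtx* cctx,
                                        void* dst, size_t dstCapacity,
                                        const void* src, size_t srcSize)
{
    ZSTD_parameters const params = ZSTD_getParams(19, srcSize, 0);
    return ZSTD_compress_advanced(cctx, dst, dstCapacity, src, srcSize,
                                  NULL, 0, params);
}

/* Compress with 2 worker threads and long-distance matching enabled.
 * Requires linking against a multithreaded libzstd.a. */
static size_t compress_mt_ldm(ZSTD_CCtx* cctx,
                              void* dst, size_t dstCapacity,
                              const void* src, size_t srcSize)
{
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, 19);
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_nbWorkers, 2);
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_enableLongDistanceMatching, 1);
    return ZSTD_compress2(cctx, dst, dstCapacity, src, srcSize);
}
```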
- Dec 11, 2018
Nick Terrell authored
Nick Terrell authored
- Dec 01, 2018
Nick Terrell authored
Dictionaries are prebuilt and saved as part of the data object. The config decides whether or not to use the dictionary if it is available. Configs that require dictionaries are only run with data that have dictionaries. The method will skip configs that are irrelevant, so for example ZSTD_compress() will skip configs with dictionaries.

I've also trimmed the silesia source to 1 MB per file (12 MB total), and added 500 samples from the github data set with a dictionary.

I've intentionally added an extra line to the `results.csv` to make the nightly build fail, so that we can see how CircleCI reports it.

Full list of changes:

* Add pre-built dictionaries to the data.
* Add `use_dictionary` and `no_pledged_src_size` flags to the config.
* Add a config using a dictionary for every level.
* Add a config that specifies no pledged source size.
* Support dictionaries and streaming in the `zstdcli` method.
* Add a context-reuse method using `ZSTD_compressCCtx()`.
* Clean up the formatting of the `results.csv` file to align columns.
* Add `--data`, `--config`, and `--method` flags to constrain each to a particular value. This is useful for debugging a failure or debugging a particular config/method/data.
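As a rough sketch of what a context-reuse method with optional dictionaries can look like (a hypothetical harness loop, not the actual `method` code from this commit): a single `ZSTD_CCtx` is created once and reused for every input, with the dictionary path going through `ZSTD_compress_usingDict()` and the plain path through `ZSTD_compressCCtx()`. Both functions are real zstd API; the `input_t` type is made up for illustration.

```c
/* Hypothetical harness loop: one ZSTD_CCtx reused across all inputs,
 * with and without a dictionary. */
#include <stdio.h>
#include <stdlib.h>
#include <zstd.h>

typedef struct { const void* data; size_t size; } input_t;  /* illustrative */

static void compress_all(const input_t* inputs, size_t numInputs,
                         const void* dict, size_t dictSize, int level)
{
    ZSTD_CCtx* const cctx = ZSTD_createCCtx();   /* created once, reused */
    if (cctx == NULL) return;

    for (size_t i = 0; i < numInputs; ++i) {
        size_t const bound = ZSTD_compressBound(inputs[i].size);
        void* const dst = malloc(bound);
        if (dst == NULL) break;

        size_t const csize = dict != NULL
            ? ZSTD_compress_usingDict(cctx, dst, bound,
                                      inputs[i].data, inputs[i].size,
                                      dict, dictSize, level)
            : ZSTD_compressCCtx(cctx, dst, bound,
                                inputs[i].data, inputs[i].size, level);

        if (ZSTD_isError(csize))
            fprintf(stderr, "input %zu: %s\n", i, ZSTD_getErrorName(csize));
        else
            printf("input %zu: %zu -> %zu bytes\n",
                   i, inputs[i].size, csize);
        free(dst);
    }
    ZSTD_freeCCtx(cctx);
}
```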
- Nov 29, 2018
Nick Terrell authored
The regression tests run nightly or on the `regression` branch for convenience. The results get uploaded as the artifacts of the job. If they change, check the diff printed in the job. If all is well, download the new results and commit them to the repo.

This code will only run on a UNIX-like platform. It could be made to run on Windows, but I don't think that it is necessary. It also uses C99.

* data: This module defines the data to run tests on. It downloads data from a URL into a cache directory, checks it against a checksum, and unpacks it. It also provides helpers for accessing the data.
* config: This module defines the configs to run tests with. A config is a set of API parameters and a set of CLI flags.
* result: This module is a helper for `method` that defines the result type.
* method: This module defines the compression methods to test. It is what runs the regression test using the data and the config. It reports the total compressed size, or an error/skip.
* test: This is the test binary that runs the tests for every (data, config, method) tuple, and prints the results to the output file and stderr.
* results.csv: The results that the current commit is expected to produce.
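The overall driver shape, as described, iterates over every (data, config, method) tuple and writes one row per tuple. A minimal sketch follows; the `data_t`/`config_t`/`method_t` types, their fields, and the CSV column layout are hypothetical stand-ins, not the real declarations or the real `results.csv` format.

```c
/* Hypothetical sketch of the driver loop; type names, fields, and the CSV
 * column layout are illustrative stand-ins, not the real declarations. */
#include <stdio.h>
#include <stddef.h>

typedef struct { const char* name; /* ... */ } data_t;
typedef struct { const char* name; int use_dictionary; /* ... */ } config_t;
typedef struct {
    const char* name;
    /* Returns the total compressed size; error/skip handling is elided. */
    size_t (*compress)(const data_t* data, const config_t* config);
} method_t;

static void run_all(FILE* out,
                    const data_t* datas, size_t nd,
                    const config_t* configs, size_t nc,
                    const method_t* methods, size_t nm)
{
    for (size_t m = 0; m < nm; ++m)
        for (size_t d = 0; d < nd; ++d)
            for (size_t c = 0; c < nc; ++c) {
                size_t const size = methods[m].compress(&datas[d], &configs[c]);
                /* One row per (data, config, method) tuple. */
                fprintf(out, "%s, %s, %s, %zu\n",
                        methods[m].name, datas[d].name, configs[c].name, size);
            }
}
```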