CsCheck

CsCheck is a C# random testing library inspired by QuickCheck.

It differs in that generation and shrinking are both based on PCG, a fast random number generator.

This gives the following advantages:

  • Automatic shrinking. Gen classes are composable with no need for Arb classes. So less boilerplate.
  • Random testing and shrinking are parallelized. This and PCG make it very fast.
  • Shrunk cases have a seed value. Simpler examples can easily be reproduced (see the sketch after this list).
  • Shrinking can be continued later to give simpler cases for high dimensional problems.
  • Concurrency testing and random shrinking work well together. Repeat is not needed.
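
For example, when a Sample fails it reports the shrunk case together with its seed, and passing that seed back in replays the same case first. A minimal sketch (the property and seed value here are purely illustrative):

[Fact]
public void Reproduce_Shrunk_Example()
{
    // On failure the shrunk counterexample is reported together with a seed string.
    // Re-running with that seed replays the same example before generating new random ones.
    Gen.Int[0, 100].Array
    .Sample(a => a.Sum() < 1_000, seed: "0N0XIzNsQ0O2");
}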

See why you should use it, the comparison with other random testing libraries, or how CsCheck does in the shrinking challenge. In one shrinking challenge test CsCheck managed to shrink to a smaller example than was previously thought possible, one that no other testing library reaches.

CsCheck also has functionality to make multiple types of testing simple and fast: random, model-based, metamorphic, concurrency, causal profiling, regression and performance testing, plus debug utilities, each covered in a section below.

The following tests are written for xUnit but could equally be used with any testing framework.

More examples can be found in the Tests project. There are also 1,000+ F# tests using CsCheck in MKL.NET.

No Reflection was used in the making of this product.

Generator Creation Example

public abstract record ValueOrRange();
public sealed record ValueOrRange_Value(double Value) : ValueOrRange;
public sealed record ValueOrRange_Range(double Lower, double? Upper) : ValueOrRange;
public enum ParameterType { Price, Spread, Yield, Discount }

// Generators
static readonly Gen<ValueOrRange> genValueOrRange =
   Gen.OneOf<ValueOrRange>(
     Gen.Double.Select(i => new ValueOrRange_Value(i)),
     Gen.Double.SelectMany(l => Gen.Double[l, l + 100].Nullable().Select(u => new ValueOrRange_Range(l, u)))
   );
static readonly Gen<Dictionary<DateTime, List<(ParameterType Type, ValueOrRange Value)>>> genExample =
   Gen.Dictionary(Gen.DateTime, Gen.Select(Gen.Enum<ParameterType>(), genValueOrRange).List);

Random testing

Unit Single

[Fact]
public void Single_Unit_Range()
{
    Gen.Single.Unit.Sample(f => Assert.InRange(f, 0f, 0.9999999f));
}

Long Range

[Fact]
public void Long_Range()
{
    (from t in Gen.Select(Gen.Long, Gen.Long)
     let start = Math.Min(t.V0, t.V1)
     let finish = Math.Max(t.V0, t.V1)
     from value in Gen.Long[start, finish]
     select (value, start, finish))
    .Sample(i => Assert.InRange(i.value, i.start, i.finish));
}

Int Distribution

[Fact]
public void Int_Distribution()
{
    int buckets = 70;
    int frequency = 10;
    int[] expected = Enumerable.Repeat(frequency, buckets).ToArray();
    Gen.Int[0, buckets - 1].Array[frequency * buckets]
    .Select(sample => Tally(buckets, sample))
    .Sample(actual => Check.ChiSquared(expected, actual));
}

Serialization Roundtrip

static void TestRoundtrip<T>(Gen<T> gen, Action<Stream, T> serialize, Func<Stream, T> deserialize)
{
    gen.Sample(t =>
    {
        using var ms = new MemoryStream();
        serialize(ms, t);
        ms.Position = 0;
        return deserialize(ms).Equals(t);
    });
}
[Fact]
public void Varint()
{
    TestRoundtrip(Gen.UInt, StreamSerializer.WriteVarint, StreamSerializer.ReadVarint);
}
[Fact]
public void Double()
{
    TestRoundtrip(Gen.Double, StreamSerializer.WriteDouble, StreamSerializer.ReadDouble);
}
[Fact]
public void DateTime()
{
    TestRoundtrip(Gen.DateTime, StreamSerializer.WriteDateTime, StreamSerializer.ReadDateTime);
}

Shrinking Challenge

[Fact]
public void No2_LargeUnionList()
{
    Gen.Int.Array.Array
    .Sample(aa =>
    {
        var hs = new HashSet<int>();
        foreach (var a in aa)
        {
            foreach (var i in a) hs.Add(i);
            if (hs.Count >= 5) return false;
        }
        return true;
    });
}

Recursive

record MyObj(int Id, MyObj[] Children);

[Fact]
public void RecursiveDepth()
{
    int maxDepth = 4;
    Gen.Recursive<MyObj>((i, my) =>
        Gen.Select(Gen.Int, my.Array[0, i < maxDepth ? 6 : 0], (id, children) => new MyObj(id, children))
    )
    .Sample(i =>
    {
        static int Depth(MyObj o) => o.Children.Length == 0 ? 0 : 1 + o.Children.Max(Depth);
        return Depth(i) <= maxDepth;
    });
}

Classify

Returning a string from the function passed to Sample classifies each generated case, and the count and timing statistics for each class are reported, as in the output below.

[Fact]
public void AllocatorMany_Classify()
{
    Gen.Select(Gen.Int[3, 30], Gen.Int[3, 15]).SelectMany((rows, cols) =>
        Gen.Select(
            Gen.Int[0, 5].Array[cols].Where(a => a.Sum() > 0).Array[rows],
            Gen.Int[900, 1000].Array[rows],
            Gen.Int.Uniform))
    .Sample((solution, rowPrice, seed) =>
    {
        var rowTotal = Array.ConvertAll(solution, row => row.Sum());
        var colTotal = Enumerable.Range(0, solution[0].Length).Select(col => solution.SumCol(col)).ToArray();
        var allocation = AllocatorMany.Allocate(rowPrice, rowTotal, colTotal, new(seed), time: 60);
        if (!TotalsCorrectly(rowTotal, colTotal, allocation.Solution))
            throw new Exception("Does not total correctly");
        return $"{(allocation.KnownGlobal ? "Global" : "Local")}/{allocation.SolutionType}";
    }, time: 900, writeLine: output.WriteLine);
}
                    Count        %        Median       Lower Q       Upper Q       Minimum       Maximum
Global                458   50.22%
  RoundingMinimum     343   37.61%        2.68ms        0.50ms       10.85ms        0.03ms      190.92ms
  EveryCombination     87    9.54%      173.99ms       16.80ms    1,199.64ms        0.20ms   42,257.35ms
  RandomChange         28    3.07%   59,592.98ms   55,267.94ms   59,901.58ms   38,575.41ms   60,107.64ms
Local                 454   49.78%
  RoundingMinimum     301   33.00%   60,000.12ms   60,000.04ms   60,003.70ms   60,000.02ms   60,144.84ms
  RandomChange         90    9.87%   60,000.06ms   60,000.03ms   60,004.41ms   60,000.02ms   60,136.59ms
  EveryCombination     63    6.91%   60,000.10ms   60,000.03ms   60,001.29ms   60,000.01ms   60,019.36ms

Model-based testing

Model-based testing is the most efficient form of random testing. Only a small amount of code is needed to fully test functionality. SampleModelBased generates an initial actual and model and then applies a random sequence of operations to both, checking that the actual and the model remain equal.

SetSlim Add

[Fact]
public void SetSlim_ModelBased()
{
    Gen.Int.Array.Select(a => (new SetSlim<int>(a), new HashSet<int>(a)))
    .SampleModelBased(
        Gen.Int.Operation<SetSlim<int>, HashSet<int>>((ls, l, i) =>
        {
            ls.Add(i);
            l.Add(i);
        })
        // ... other operations
    );
}

Metamorphic testing

The second most efficient form of random testing is metamorphic testing, which means doing something in two different ways and checking that they produce the same result. SampleMetamorphic generates two identical initial samples, applies the two functions, and asserts that the results are equal. This is useful when the only model that can be found would just be a reimplementation.

More about how useful metamorphic tests can be here: How to specify it!.

MapSlim Update

[Fact]
public void MapSlim_Metamorphic()
{
    Gen.Dictionary(Gen.Int, Gen.Byte)
    .Select(d => new MapSlim<int, byte>(d))
    .SampleMetamorphic(
        Gen.Select(Gen.Int[0, 100], Gen.Byte, Gen.Int[0, 100], Gen.Byte).Metamorphic<MapSlim<int, byte>>(
            (d, t) => { d[t.V0] = t.V1; d[t.V2] = t.V3; },
            (d, t) => { if (t.V0 == t.V2) d[t.V2] = t.V3; else { d[t.V2] = t.V3; d[t.V0] = t.V1; } }
        )
    );
}

Concurrency testing

CsCheck has support for concurrency testing with full shrinking capability. A random sequence of operations is run concurrently on an initial state and the result is compared to all possible linearized versions. At least one of the linearized results must be equal to the concurrent result.

Idea from John Hughes talk and paper. This is actually easier to implement with CsCheck than QuickCheck because the random shrinking does not need to repeat each step as QuickCheck does (10 times by default) to make shrinking deterministic.

SetSlim

[Fact]
public void SetSlim_Concurrency()
{
    Gen.Byte.Array.Select(a => new SetSlim<byte>(a))
    .SampleConcurrent(
        Gen.Byte.Operation<SetSlim<byte>>((l, i) => { lock (l) l.Add(i); }),
        Gen.Int.NonNegative.Operation<SetSlim<byte>>((l, i) => { if (i < l.Count) { var _ = l[i]; } }),
        Gen.Byte.Operation<SetSlim<byte>>((l, i) => { var _ = l.IndexOf(i); }),
        Gen.Operation<SetSlim<byte>>(l => l.ToArray())
    );
}

Causal profiling

Causal profiling is a technique to investigate the effect of speeding up one or more concurrent regions of code. It shows which regions are the bottleneck and what overall performance gain could be achieved from each region.

Idea from Emery Berger. My blog posts on this here.

Fasta

[Fact]
public void Fasta()
{
    Causal.Profile(() => FastaUtils.Fasta.NotMain(10_000_000, null)).Output(writeLine);
}

static int[] Rnds(int i, int j, ref int seed)
{
    var region = Causal.RegionStart("rnds");
    var a = intPool.Rent(BlockSize1);
    var s = a.AsSpan(0, i);
    s[0] = j;
    for (i = 1, j = Width; i < s.Length; i++)
    {
        if (j-- == 0)
        {
            j = Width;
            s[i] = IM * 3 / 2;
        }
        else
        {
            s[i] = seed = (seed * IA + IC) % IM;
        }
    }
    Causal.RegionEnd(region);
    return a;
}

Regression testing

Portfolio Calculation

Single is used to find, pin and continue to check a suitable generated example e.g. to cover a certain codepath.
Hash is used to find and check a hash for a number of results. It saves a cache of the results on a successful hash check and each subsequent run will fail with actual vs expected at the first point of any difference.
Together Single and Hash eliminate the need to commit data files in regression testing while also giving detailed information of any change.

[Fact]
public void Portfolio_Small_Mixed_Example()
{
    var portfolio = ModelGen.Portfolio.Single(p =>
           p.Positions.Count == 5
        && p.Positions.Any(q => q.Instrument is Bond)
        && p.Positions.Any(q => q.Instrument is Equity)
    , "0N0XIzNsQ0O2");
    var currencies = portfolio.Positions.Select(p => p.Instrument.Currency).Distinct().ToArray();
    var fxRates = ModelGen.Price.Array[currencies.Length].Single(a =>
        a.All(p => p is > 0.75 and < 1.5)
    , "ftXKwKhS6ec4");
    double fxRate(Currency c) => fxRates[Array.IndexOf(currencies, c)];
    Check.Hash(h =>
    {
        h.Add(portfolio.Positions.Select(p => p.Profit));
        h.Add(portfolio.Profit(fxRate));
        h.Add(portfolio.RiskByPosition(fxRate));
    }, 5857230471108592669, decimalPlaces: 2);
}

Performance testing

Linq Sum

[Fact]
public void Faster_Linq_Random()
{
    Gen.Byte.Array[100, 1000]
    .Faster(
        data => data.Aggregate(0.0, (t, b) => t + b),
        data => data.Select(i => (double)i).Sum()
    )
    .Output(writeLine);
}

If the comparison fails the performance result is raised in an exception; if it passes it can also be written to the output using the output function as shown above.

Tests.CheckTests.Faster_Linq_Random [27ms]
Standard Output Messages:
32.2%[29.4%..36.5%] faster, sigma=50.0 (2,551 vs 17)

The first number is the estimated median performance improvement with the interquartile range in the square brackets. The counts of faster vs slower and the corresponding sigma (the number of standard deviations of the binomial distribution for the null hypothesis P(faster) = P(slower) = 0.5) are also shown. The default sigma used is 6.0.
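
For the example above, sigma works out as (2,551 - 17) / √(2,551 + 17) ≈ 50: under the null hypothesis the number of faster runs out of n = 2,568 is binomial with mean n/2 and standard deviation √n/2, so the observed excess of faster runs is about 50 standard deviations from the mean.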

Matrix Multiply

[Fact]
public void Faster_Matrix_Multiply_Range()
{
    var genDim = Gen.Int[5, 30];
    var genArray = Gen.Double.Unit.Array2D;
    Gen.SelectMany(genDim, genDim, genDim, (i, j, k) => Gen.Select(genArray[i, j], genArray[j, k]))
    .Faster(
        MulIKJ,
        MulIJK
    );
}

MapSlim Increment

[Fact]
public void MapSlim_Performance_Increment()
{
    Gen.Byte.Array
    .Select(a => (a, new MapSlim<byte, int>(), new Dictionary<int, int>()))
    .Faster(
        (items, mapslim, _) =>
        {
            foreach (var b in items)
                mapslim.GetValueOrNullRef(b)++;
        },
        (items, _, dict) =>
        {
            foreach (var b in items)
            {
                dict.TryGetValue(b, out int c);
                dict[b] = c + 1;
            }
        },
        repeat: 100
    ).Output(writeLine);
}
Tests.SlimCollectionsTests.MapSlim_Performance_Increment [27 s]
Standard Output Messages:
66.0%[56.4%..74.8%] faster, sigma=200.0 (72,690 vs 13,853)

Benchmarks Game

[Fact]
public void ReverseComplement_Faster()
{
    if (!File.Exists(Utils.Fasta.Filename)) Utils.Fasta.NotMain(new[] { "25000000" });

    Check.Faster(
        ReverseComplementNew.RevComp.NotMain,
        ReverseComplementOld.RevComp.NotMain,
        threads: 1, timeout: 600_000, sigma: 6
    )
    .Output(writeLine);
}
Tests.ReverseComplementTests.ReverseComplement_Faster [27s 870ms]
Standard Output Messages:
25.1%[20.5%..31.6%] faster, sigma=6.0 (36 vs 0)

Varint

Repeat is used as the functions are very quick.

[Fact]
public void Varint_Faster()
{
    Gen.Select(Gen.UInt, Gen.Const(() => new byte[8]))
    .Faster(
        (i, bytes) =>
        {
            int pos = 0;
            ArraySerializer.WriteVarint(bytes, ref pos, i);
            pos = 0;
            return ArraySerializer.ReadVarint(bytes, ref pos);
        },
        (i, bytes) =>
        {
            int pos = 0;
            ArraySerializer.WritePrefixVarint(bytes, ref pos, i);
            pos = 0;
            return ArraySerializer.ReadPrefixVarint(bytes, ref pos);
        }, sigma: 10, repeat: 200)
    .Output(writeLine);
}
Tests.ArraySerializerTests.Varint_Faster [45 ms]
Standard Output Messages:
10.9%[-3.2%..25.8%] faster, sigma=10.0 (442 vs 190)

Debug utilities

The Dbg module is a set of utilities to collect, count and output debug info, time code, classify generators, define and remotely call functions, and perform in-code regression checks during testing. CsCheck can temporarily be added as a reference so these can be used in non-test code. Note this module is only for temporary debug use and the API may change between minor versions.

Count, Info, Set, Get, CallAdd, Call

public void Normal_Code(int z)
{
    Dbg.Count();
    var d = Calc1(z).DbgSet("d");
    Dbg.Call("helpful");
    var c = Calc2(d).DbgInfo("c");
    Dbg.CallAdd("test cache", () =>
    {
        Dbg.Info(Dbg.Get("d"));
        Dbg.Info(cacheItems);
    });
}

[Fact]
public void Test()
{
    Dbg.CallAdd("helpful", () =>
    {
        var d = (double)Dbg.Get("d");
        // ...
        Dbg.Set("d", d);
    });
    Normal_Code(z);
    Dbg.Call("test cache");
    Dbg.Output(writeLine);
}

Classify

[Fact]
public void Test()
{
    Gen.Int.Array
    .DbgClassify(a => a.Length switch
    {
        0 => "zero length",
        1 => "single length",
        < 10 => "less ten",
        > 100 => "over 100",
        _ => "10 - 100",
    })
    .Sample(a =>
    {
        // ...
    });

    Dbg.Output(writeLine);
}

Regression

public double[] Calculation(InputData input)
{
    var part1 = CalcPart1(input);
    // Add items to the regression on first pass, throw/break here if different on subsequent.
    Dbg.Regression.Add(part1);
    var part2 = CalcPart2(part1).DbgTee(Dbg.Regression.Add); // Tee can be used to do this inline.
    // ...
    return CalcFinal(partN).DbgTee(Dbg.Regression.Add);
}

[Fact]
public void Test()
{
    // Remove any previously saved regression data.
    Dbg.Regression.Delete();

    Calculation(InputSource1());

    // End first pass save mode (only needed if second pass is in this process run).
    Dbg.Regression.Close();

    // Subsequent pass could be now or a code change and rerun (without the Delete).
    Calculation(InputSource2());

    // Check full number of items have been reconciled (optional).
    Dbg.Regression.Close();
}

Time


public Result CalcPart2(InputData input)
{
    using var time = Dbg.Time();
    // Calc
    time.Line();
    // Calc more
    time.Line();
    // ...
    return result;
}


public Result LongProcess()
{
    using var time = Dbg.Time();
    var part1 = CalcPart1(input);
    time.Line();
    var part2 = new List<Result>();
    foreach (var item in part1)
        part2.Add(CalcPart2(item));
    time.Line();
    // ...
    return CalcFinal(partN);
}

[Fact]
public void Test()
{
    LongProcess();
    Dbg.Output(writeLine);
}

Configuration

Check functions accept optional configuration parameters e.g. iter: 100_000, seed: "0N0XIzNsQ0O2", print: t => string.Join(", ", t):

iter - The number of iterations to run in the sample (default 100).
time - The number of seconds to run the sample.
seed - The seed to use for the first iteration.
threads - The number of threads to run the sample on (default the number of logical CPUs).
timeout - The timeout in seconds to use for Faster (default 60 seconds).
print - A function to convert the state to a string for error reporting (default Check.Print).
equal - A function to check if the two states are the same (default Check.Equal).
sigma - For Faster sigma is the number of standard deviations from the null hypothesis (default 6).
replay - The number of times to retry the seed to reproduce a SampleConcurrent fail (default 100).
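
As a minimal sketch of passing these directly to Sample (the generator and property below are illustrative):

[Fact]
public void Sample_With_Config()
{
    // iter, seed and print are the optional configuration parameters described above.
    Gen.Int[0, 1000].Array
    .Sample(a => a.Sum() == a.OrderBy(i => i).Sum(),
        iter: 100_000,
        seed: "0N0XIzNsQ0O2",
        print: a => string.Join(", ", a));
}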

Global defaults can also be set via environment variables (PowerShell examples shown):

$env:CsCheck_Iter = 10000; dotnet test -c Release --filter Multithreading; rm env:CsCheck*

$env:CsCheck_Time = 60; dotnet test -c Release --filter Multithreading; rm env:CsCheck*

$env:CsCheck_Seed = '0N0XIzNsQ0O2'; dotnet test -c Release --filter List; rm env:CsCheck*

$env:CsCheck_Sigma = 50; dotnet test -c Release -l 'console;verbosity=detailed' --filter Faster; rm env:CsCheck*

$env:CsCheck_Threads = 1; dotnet test -c Release -l 'console;verbosity=detailed' --filter Perf; rm env:CsCheck*

Development

Contributions are very welcome!

CsCheck was designed to be easily extended. If you have created a cool Gen or extension please consider a PR.

Apache 2 and free forever.

Release Notes (3.0.0-rc3)

- Improved generation for double, float and decimal: a mix of integer, fraction, exponential and evenly distributed values.
- Improved Print for double, float and decimal: displays a small rounded fraction and an exponential representation.
- Improved min size passing and shrinking.
- Large performance improvements for Sample and Faster.
- Added optimised Timer functionality and funcless IInvoke.
- Added Gen<T>.Single() that generates a single random value.

BREAKING CHANGES:
- Example(predicate) renamed to Single(predicate).
- Removed SampleOne, use Sample(..., iter: 1, time: -2) instead.
- Removed Cast, use Select instead (better performance).
- Removed Create, create a class inheriting from Gen<T> instead.
- Removed SelectTuple.
- Removed Gen.Enumerable.
- Removed Gen.Double.Positive, Gen.Double.Normal etc.
- Added a max count to Where.
- Now targets net6.0; 2.14.1 was the last version to target netstandard2.0.