# C++: The power of unpack

The Fit library has a `fit::unpack` function adaptor that will unpack a sequence onto a function. It works with both `std::tuple` and `fit::pack`:

```
auto check_equal = [](auto x, auto y) { return x == y; };
assert(fit::unpack(check_equal)(std::make_tuple(1, 1)));
assert(fit::unpack(check_equal)(fit::pack(1, 1)));
```

It is also extensible, so by specializing `fit::unpack_sequence` it can unpack other sequences as well. This post won't focus on the extensibility of `fit::unpack`, but rather on what can be done with `fit::unpack` when combined with other components in Fit.

## Basic algorithms

We can implement simple algorithms such as `transform` (also known as a functor) by using the `fit::by` adaptor. This adaptor applies a projection over each of the parameters of the function. For the `transform` algorithm, the function is just the construction of `std::tuple`, with the projection being applied to each parameter (or element) of the tuple:

```
FIT_STATIC_LAMBDA_FUNCTION(transform) = [](auto&& sequence, auto f)
{
    return fit::unpack(fit::by(f, fit::construct<std::tuple>()))(
        std::forward<decltype(sequence)>(sequence));
};
```

Now, if we leave off the `fit::construct<std::tuple>()`, then `fit::by` will still call the projection on each parameter; it just won't pass the results to another function. This is known as `for_each`:

```
FIT_STATIC_LAMBDA_FUNCTION(for_each) = [](auto&& sequence, auto f)
{
    return fit::unpack(fit::by(f))(std::forward<decltype(sequence)>(sequence));
};
```

So algorithms can be written by combining adaptors with `fit::unpack`. Another example is `fold`, which can be implemented using `fit::compress`:

```
FIT_STATIC_LAMBDA_FUNCTION(fold) = [](auto&& sequence, auto f)
{
    return fit::unpack(fit::compress(f))(std::forward<decltype(sequence)>(sequence));
};
```

## Filter

Well, `filter` needs a little more work, as the Fit library doesn't provide a `filter_adaptor`. We can implement it using a combination of `join` and `transform` (also known as a monad). First, we transform each element into a sequence holding one element if it matches the predicate, or an empty sequence otherwise. Next, we do a `join`, which flattens all these sequences together into one sequence. The empty sequences fall away, leaving just the elements that match the predicate.

So first, let's look at how to implement `join`. This is actually very easy, since `fit::unpack` can unpack multiple sequences. All the elements from each sequence will be passed to the function in order:

```
auto check_equal = [](auto x, auto y) { return x == y; };
assert(fit::unpack(check_equal)(std::make_tuple(1), std::make_tuple(1)));
```

So `fit::unpack(fit::construct<std::tuple>())` is equivalent to `tuple_cat`. However, `join` takes a sequence of sequences, so it's just a matter of doing a double unpack:

```
FIT_STATIC_FUNCTION(join) = fit::unpack(fit::unpack(fit::construct<std::tuple>()));
```

So now we compose this with `transform` to implement `filter`. We can't check the predicate using a runtime `if`; we actually need a `static_if`. C++ doesn't provide one, but we can use `fit::if_` with `fit::conditional` to check the predicate:

```
FIT_STATIC_LAMBDA_FUNCTION(filter) = [](auto&& sequence, auto predicate)
{
    return fit::compose(join, transform)(
        std::forward<decltype(sequence)>(sequence),
        [&](auto&& x)
        {
            return fit::conditional(
                fit::if_(predicate(std::forward<decltype(x)>(x)))(fit::pack),
                fit::always(fit::pack())
            )(std::forward<decltype(x)>(x));
        }
    );
};
```

Of course, the predicate must return a compile-time constant that depends only on the element's type, which is very natural to write with a generic lambda:

```
auto t = std::make_tuple(1, 2, 'x', 3);
auto r = filter(t, [](auto x) { return std::is_same<int, decltype(x)>(); });
assert(r == std::make_tuple(1, 2, 3));
```

## Zipping sequences

The `zip` function combines two sequences so they can be unpacked together. The Fit library does provide `fit::combine`, which combines each function with a corresponding parameter, essentially zipping them together. However, we first need to transform the first sequence into a sequence of functions:

```
FIT_STATIC_LAMBDA_FUNCTION(zip_with) = [](auto&& sequence1, auto&& sequence2, auto f)
{
    auto&& functions = transform(
        std::forward<decltype(sequence1)>(sequence1),
        [&](auto&& x)
        {
            return [&](auto&& y)
            {
                return f(std::forward<decltype(x)>(x), std::forward<decltype(y)>(y));
            };
        }
    );
    ...
};
```

Next, we can unpack that sequence with `fit::combine`; however, the first function passed to `combine` will be the construction of `std::tuple`:

```
FIT_STATIC_LAMBDA_FUNCTION(zip_with) = [](auto&& sequence1, auto&& sequence2, auto f)
{
    auto&& functions = transform(
        std::forward<decltype(sequence1)>(sequence1),
        [&](auto&& x)
        {
            return [&](auto&& y)
            {
                return f(std::forward<decltype(x)>(x), std::forward<decltype(y)>(y));
            };
        }
    );
    auto combined = fit::unpack(fit::capture(fit::construct<std::tuple>())(fit::combine))(functions);
    ...
};
```

Finally, we unpack `combined` with the second sequence:

```
FIT_STATIC_LAMBDA_FUNCTION(zip_with) = [](auto&& sequence1, auto&& sequence2, auto f)
{
    auto&& functions = transform(
        std::forward<decltype(sequence1)>(sequence1),
        [&](auto&& x)
        {
            return [&](auto&& y)
            {
                return f(std::forward<decltype(x)>(x), std::forward<decltype(y)>(y));
            };
        }
    );
    auto combined = fit::unpack(fit::capture(fit::construct<std::tuple>())(fit::combine))(functions);
    return fit::unpack(combined)(std::forward<decltype(sequence2)>(sequence2));
};
```

So now we can use it to, for example, compute the dot product of two tuples:

```
FIT_STATIC_LAMBDA_FUNCTION(dot) = [](auto&& x, auto&& y)
{
    auto product = zip_with(x, y, [](auto x, auto y) { return x * y; });
    return fold(product, [](auto x, auto y) { return x + y; });
};
```

## Conclusion

We can actually accomplish a lot using `fit::unpack`. We just wrote a simple, extensible heterogeneous algorithms library in under 100 lines of code, without using template template parameters.

Of course, this doesn't handle laziness, but for most cases that is not necessary. When it is, some form of thunks could be used for the computation instead.