Interactive Conv Logic Autoencoder

Click any channel to see its learned logic circuit

Research on parameterizing neural networks with logic gates is very exciting. Petersen's relaxation of logic gates provides a path to optimizing such networks with gradient descent. I was curious whether this approach could be used to build bottlenecked autoencoders for data compression, so I tried it on the STL-10 image dataset.
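The core idea, as I understand it, is that each unit reads two inputs and learns a distribution over the 16 possible two-input Boolean gates, with each gate relaxed to real-valued arithmetic so gradients can flow. Here is a minimal sketch of that relaxation; it is my own illustration, not the code behind this demo, and the random input wiring and gate ordering are illustrative choices:

```python
# Minimal sketch of Petersen-style differentiable logic gates (illustrative,
# not the demo's actual code). Each unit mixes the 16 two-input Boolean
# gates, relaxed to real arithmetic, via a learnable softmax over gates.
import torch
import torch.nn as nn
import torch.nn.functional as F

def all_gates(a, b):
    # Real-valued relaxations of the 16 two-input Boolean gates. With
    # a, b in [0, 1], each expression agrees with its gate on {0, 1} inputs.
    return torch.stack([
        torch.zeros_like(a),      # FALSE
        a * b,                    # AND
        a - a * b,                # A AND NOT B
        a,                        # A
        b - a * b,                # NOT A AND B
        b,                        # B
        a + b - 2 * a * b,        # XOR
        a + b - a * b,            # OR
        1 - (a + b - a * b),      # NOR
        1 - (a + b - 2 * a * b),  # XNOR
        1 - b,                    # NOT B
        1 - b + a * b,            # A OR NOT B
        1 - a,                    # NOT A
        1 - a + a * b,            # NOT A OR B
        1 - a * b,                # NAND
        torch.ones_like(a),       # TRUE
    ], dim=-1)

class LogicGateLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        # Each output unit reads two fixed, randomly chosen inputs...
        self.register_buffer("idx_a", torch.randint(in_dim, (out_dim,)))
        self.register_buffer("idx_b", torch.randint(in_dim, (out_dim,)))
        # ...and learns which of the 16 gates to apply to them.
        self.gate_logits = nn.Parameter(torch.randn(out_dim, 16))

    def forward(self, x, hard=False):
        a, b = x[:, self.idx_a], x[:, self.idx_b]
        gates = all_gates(a, b)                     # (batch, out_dim, 16)
        if hard:
            # Discretize: each unit collapses to a single concrete gate.
            choice = F.one_hot(self.gate_logits.argmax(-1), 16).float()
        else:
            choice = F.softmax(self.gate_logits, dim=-1)
        return (gates * choice).sum(-1)
```

After training, taking the argmax gate for every unit (the `hard=True` path) yields a discrete circuit, which is what makes this interesting for extracting hardware-style designs.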

The goal is to train a network to encode and decode images. Since the network is parameterized by logic gates, the result of training is a circuit design. I downsized the images to 16×16 and represent each pixel using 2 binary values. The bottleneck dimension is half of the input dimension.
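For concreteness, here is one way the binarization could work, assuming the 2 binary values per pixel come from a 2-level thermometer code of grayscale intensity; the exact scheme (and how it relates to the 4 input channels listed below) isn't spelled out here, so treat the thresholds and grayscale conversion as guesses:

```python
# Sketch of a possible input binarization (assumed, not the demo's code):
# downsize to 16x16 and threshold grayscale intensity at two levels,
# giving 2 bits per pixel.
import torch
import torch.nn.functional as F

def binarize(images, size=16, thresholds=(1 / 3, 2 / 3)):
    """images: (batch, 3, H, W) floats in [0, 1] -> (batch, 2, size, size) bits."""
    gray = images.mean(dim=1, keepdim=True)                   # crude grayscale
    small = F.interpolate(gray, size=(size, size),
                          mode="bilinear", align_corners=False)
    t = torch.tensor(thresholds).view(1, -1, 1, 1)
    return (small >= t).float()                               # 2 bits per pixel
```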

The reconstructed images are not amazing and the compression ratio is not great, but it is interesting that the approach works at all.

ENCODER
Block 0: 4→16 channels (16×16 → 8×8 after pool)
Block 1: 16→32 channels (8×8 → 4×4 after pool) [BOTTLENECK]
DECODER
Block 0: 32→16 channels (4×4 → 8×8 after upsample)
Block 1: 16→4 channels (8×8 → 16×16 after upsample) [OUTPUT]
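Building on the LogicGateLayer sketched above, here is a rough guess at how these blocks could be wired together: each block applies a shared set of logic-gate units to every 3×3 patch, then pools or upsamples to match the spatial sizes in the listing. The patch size, pooling, and upsampling choices are my assumptions, not the demo's actual implementation:

```python
# Sketch of the listed encoder/decoder blocks, reusing the LogicGateLayer
# defined in the earlier sketch. Each block applies shared logic-gate units
# to every 3x3 patch, then pools or upsamples (both assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvLogicBlock(nn.Module):
    def __init__(self, in_ch, out_ch, resample=None, k=3):
        super().__init__()
        self.k, self.out_ch, self.resample = k, out_ch, resample
        # One logic-gate layer shared across all spatial positions,
        # reading a flattened in_ch * k * k patch.
        self.logic = LogicGateLayer(in_ch * k * k, out_ch)

    def forward(self, x, hard=False):
        B, C, H, W = x.shape
        patches = F.unfold(x, self.k, padding=self.k // 2)    # (B, C*k*k, H*W)
        patches = patches.transpose(1, 2).reshape(B * H * W, -1)
        out = self.logic(patches, hard=hard)                   # (B*H*W, out_ch)
        out = out.reshape(B, H * W, self.out_ch).permute(0, 2, 1)
        out = out.reshape(B, self.out_ch, H, W)
        if self.resample == "pool":
            out = F.max_pool2d(out, 2)                         # halve spatial size
        elif self.resample == "up":
            out = F.interpolate(out, scale_factor=2, mode="nearest")
        return out

encoder = nn.Sequential(
    ConvLogicBlock(4, 16, "pool"),    # 4 ch, 16x16 -> 16 ch, 8x8
    ConvLogicBlock(16, 32, "pool"),   # 16 ch, 8x8 -> 32 ch, 4x4  (bottleneck)
)
decoder = nn.Sequential(
    ConvLogicBlock(32, 16, "up"),     # 32 ch, 4x4 -> 16 ch, 8x8
    ConvLogicBlock(16, 4, "up"),      # 16 ch, 8x8 -> 4 ch, 16x16 (output)
)

x = torch.randint(0, 2, (1, 4, 16, 16)).float()
recon = decoder(encoder(x))           # round trip: (1, 4, 16, 16)
```

With this layout the bottleneck holds 32 × 4 × 4 = 512 values, half of the 4 × 16 × 16 = 1024 input bits, which matches the halved bottleneck dimension described above.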