
Flex Logix

Flex Logix has completed the hardware design and is fabricating its first inference accelerator coprocessor, InferX X1, which is based on our nnMAX Inference IP. We will have chips and PCIe boards this summer. Our software team is preparing our Inference Model Compiler to run deep neural network models on the X1, and we have begun architecting the follow-on chip. InferX delivers industry-leading inference efficiency: more inference throughput per dollar and per watt. We excel on larger models and megapixel images but can run any neural network.

For more info: https://flex-logix.com/wp-content/uploads/2020/05/2020-05-Inference-Software-Developer-Experienced.pdf

To apply: hr@flex-logix.com