Projects
Boolean Function Synthesis using Gated Continuous Logic Network
I am currently working on Boolean Function Synthesis with Stanly Samuel under the guidance of Prof. Aditya Kanade, Prof. Chiranjib Bhattacharyya, and Prof. Deepak D’Souza.
- The focus of this project is to understand how well a neural network can capture the semantics of a Boolean formula for synthesis.
- We use a modified GCLN (Gated Continuous Logic Network) as our model architecture, applied for the first time in the context of Boolean Function Synthesis.
- Ideas used: 1. Fractional Sampling, 2. Learning using GCLN, 3. Validity Checking.
- Preliminary results show promise.
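To make the GCLN idea above concrete, here is a minimal sketch of gated continuous-logic neurons built on the product t-norm relaxation of AND/OR/NOT. The gating scheme shown (a learned gate in [0,1] per literal, where gate 0 makes the literal inert) is one common formulation; the exact parameterization in our model may differ.

```python
# Sketch of gated continuous-logic neurons (product t-norm relaxation).
# Gates g_i in [0,1] select which literals participate; these formulas
# are illustrative, not the exact parameterization of our model.

def gated_and(xs, gates):
    # Gated conjunction: an input with gate 0 is ignored (acts as True).
    out = 1.0
    for x, g in zip(xs, gates):
        out *= 1.0 - g * (1.0 - x)
    return out

def gated_or(xs, gates):
    # Gated disjunction: an input with gate 0 is ignored (acts as False).
    out = 1.0
    for x, g in zip(xs, gates):
        out *= 1.0 - g * x
    return 1.0 - out

def neg(x):
    # Continuous negation.
    return 1.0 - x
```

At Boolean corner points (inputs and gates in {0, 1}) these neurons recover classical logic exactly, while remaining differentiable in between, which is what lets gradient descent learn a formula's structure.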
Challenges faced:
- Small dataset.
  - Sampling only Boolean values for each variable yields a very limited dataset (at most 2^n points for n variables).
  - Solved by employing fractional sampling.
- Conversion from verilog to python3.
  - Verilog is a declarative language, while Python is an imperative one.
  - This created dependency issues in the generated Python file.
  - Built a DAG of variable dependencies and performed a topological sort to resolve them.
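The fractional-sampling workaround above can be sketched as follows: instead of drawing inputs only from {0, 1}, sample from [0, 1]^n and label each point with a continuous relaxation of the specification. The XOR specification and function names here are hypothetical examples, not the actual benchmarks.

```python
import random

# Sketch of fractional sampling: rather than only the 2^n Boolean points,
# sample inputs from [0,1]^n and label them with a continuous relaxation
# of the spec. The XOR spec below is a hypothetical example.

def relaxed_xor(a, b):
    # Continuous XOR: agrees with Boolean XOR at the {0,1} corners.
    return a + b - 2 * a * b

def fractional_dataset(n_samples, seed=0):
    rng = random.Random(seed)
    data = []
    for _ in range(n_samples):
        a, b = rng.random(), rng.random()
        data.append(((a, b), relaxed_xor(a, b)))
    return data
```

Because the relaxation agrees with the Boolean function on corner points, a network trained on fractional samples is still constrained to the right Boolean semantics, but sees far more than 2^n training points.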
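The dependency fix for the Verilog-to-Python conversion can be sketched with Kahn's algorithm: Verilog `assign` statements are order-independent, so before emitting Python we sort assignments by their read-after-write dependencies. The `assigns` mapping (defined wire to the wires it reads) is a hypothetical simplified input format.

```python
from collections import defaultdict, deque

# Sketch of the dependency fix: build a DAG over assignments and
# topologically sort it (Kahn's algorithm) so each Python statement
# only reads wires that were already computed.

def topo_order(assigns):
    # assigns: dict mapping each defined wire -> list of wires it reads
    indeg = {w: 0 for w in assigns}
    dependents = defaultdict(list)
    for wire, reads in assigns.items():
        for r in reads:
            if r in assigns:          # ignore primary inputs
                dependents[r].append(wire)
                indeg[wire] += 1
    queue = deque(w for w, d in indeg.items() if d == 0)
    order = []
    while queue:
        w = queue.popleft()
        order.append(w)
        for v in dependents[w]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    if len(order) != len(assigns):
        raise ValueError("combinational cycle in netlist")
    return order
```

Emitting the assignments in this order guarantees every variable is defined before it is used, which resolves the dependency errors in the generated Python file.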
Generating Grammar Rules for Syntax-Guided Synthesis using Deep Neural Network:
- This project aimed at restricting the solution space of a SyGuS problem by predicting the set of relevant grammar rules for deriving the final program.
- The model architecture feeds a gated graph neural network into a feed-forward neural network.
- Achieved 91% accuracy on the validation data. GitHub
Report | Data_Generation_Flow_Diagram
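The final prediction step of the project above can be viewed as multi-label classification over grammar rules. In this sketch the graph and feed-forward encoders are elided; `scores` stands in for the network's per-rule logits, and the rule names are hypothetical.

```python
import math

# Sketch of grammar-rule selection as multi-label classification:
# each rule gets an independent probability, and every rule above
# the threshold is kept in the restricted grammar.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_rules(scores, rule_names, threshold=0.5):
    # Keep every rule whose predicted probability exceeds the threshold.
    return [name for name, s in zip(rule_names, scores)
            if sigmoid(s) > threshold]
```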
Synthesizing Programs from Logical Constraints using Neural Network:
In this project, our aim was to discover whether neural networks can capture the semantics of logical constraints and, if so, whether they can synthesize meaningful programs from those constraints. We experimented with the Conditional Linear Integer Arithmetic track of the SyGuS competition 2019. In a multi-modal setting, we used a GGNN and a GRU to encode the constraints and a GRU to decode the final programs. This project was done during the course Program Synthesis meets Machine Learning (Jan '20 to Jun '20). I worked on this project along with Stanly Samuel under the guidance of Prof. Deepak D’Souza, Prof. Chiranjib Bhattacharyya, and Dr. Sriram Rajamani (Microsoft). GitHub
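To illustrate the encoding side, here is a toy sketch of one synchronous message-passing round over a constraint AST, in the spirit of a GGNN. The real model uses learned weight matrices and a GRU-style node update; this sketch uses scalar states and a tanh update purely for brevity, and the node and edge names are hypothetical.

```python
import math

# Toy sketch of one GGNN-style message-passing round over a constraint
# AST. Each node aggregates messages from its incoming edges, then
# applies a nonlinear update (the real model uses a GRU update here).

def message_pass(node_states, edges, w_msg=0.5, w_self=1.0):
    # node_states: dict node -> scalar state; edges: list of (src, dst)
    incoming = {v: 0.0 for v in node_states}
    for src, dst in edges:
        incoming[dst] += w_msg * node_states[src]
    return {v: math.tanh(w_self * h + incoming[v])
            for v, h in node_states.items()}
```

Stacking several such rounds lets information flow along the structure of the constraint (e.g. from the leaves `x` and `1` up to an operator node), which is how the encoder builds a semantics-aware representation for the GRU decoder.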
Learning in Sparse Reward Environment:
Studied what sparse rewards are and how an agent learns to achieve a desired goal even with only binary rewards. In this project we implemented Hindsight Experience Replay (HER). We also studied, through experimentation, the effect of demonstrations. Finally, we implemented HER for dynamic environments. This project was done during the course Machine Learning (Jan '20 to Jun '20). I worked on this project along with Mariamma Antony (PhD), Vivek Khandelwal (Masters), and Jagriti Singh (PhD) under the guidance of Shubham Gupta.
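The core HER trick can be sketched in a few lines: a failed episode is reused by pretending that a state the agent actually reached was the goal all along, turning a sparse binary reward into useful learning signal. The episode tuple format below is a hypothetical simplification (the standard variant also samples intermediate achieved states as goals).

```python
# Sketch of Hindsight Experience Replay relabeling: replay a failed
# episode with the final achieved state substituted as the goal, so
# the sparse binary reward becomes informative.

def her_relabel(episode):
    # episode: list of (state, action, achieved_state) tuples
    new_goal = episode[-1][2]   # pretend the final achieved state was the goal
    relabeled = []
    for state, action, achieved in episode:
        reward = 0.0 if achieved == new_goal else -1.0
        relabeled.append((state, action, new_goal, reward))
    return relabeled
```

Under this relabeling, at least the last transition of every episode carries a success reward, so the agent gets gradient signal even when the original goal was never reached.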