We endeavor to rebuild the query processing framework for big data under constrained resources, so that small and medium-sized enterprises can genuinely benefit from big data.
We pursue a leap beyond traditional approximate computing, moving firmly toward data-driven approximate algorithms and their supporting theory.
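One classic baseline that data-driven approaches build on is sampling-based approximate query processing: answer an aggregate query from a uniform sample and report an error estimate. The sketch below is our own minimal illustration of that baseline (the function name and parameters are ours, not the group's method):

```python
import random
import statistics

def approximate_sum(data, sample_size, seed=0):
    """Estimate sum(data) from a uniform random sample.

    Returns (estimate, standard_error). A toy sketch of
    sampling-based approximate query processing.
    """
    random.seed(seed)
    sample = random.sample(data, sample_size)
    scale = len(data) / sample_size          # scale sample sum up to the full data
    estimate = scale * sum(sample)
    # SE of the estimate: N * stdev / sqrt(n) == scale * stdev * sqrt(n)
    se = scale * statistics.stdev(sample) * (sample_size ** 0.5)
    return estimate, se

data = list(range(1_000_000))
est, se = approximate_sum(data, 10_000)
```

Scanning 1% of the data yields an estimate typically within a few standard errors of the true sum; data-driven methods aim to beat such uniform sampling by learning where the mass of the data lies.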
At present, incremental programs hand-crafted for specific problems face high barriers to entry. We are working on an effective, general-purpose incremental method, combining programming-language, compiler, and algorithmic techniques to derive incremental programs.
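The core idea of incrementality is to update a result in response to a change in the input instead of recomputing from scratch. A toy hand-written example (our illustration, not the group's general method) maintains a running mean under insertions and deletions in O(1) per update:

```python
class IncrementalMean:
    """Maintain the mean of a multiset under insertions and deletions.

    Each update is O(1); a from-scratch recomputation would be O(n).
    A toy sketch of incremental computation.
    """
    def __init__(self):
        self.count = 0
        self.total = 0.0

    def insert(self, x):
        self.count += 1
        self.total += x

    def delete(self, x):
        self.count -= 1
        self.total -= x

    def mean(self):
        return self.total / self.count if self.count else 0.0
```

A general incremental method would derive such update logic automatically from the original (batch) program, which is exactly what makes the problem a programming-language and compiler challenge.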
We are engaged in a novel strategy for balancing computing resources against efficiency, thereby cutting the overheads of distributed computing, including computation and communication time.
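The trade-off can be seen in a toy cost model (the numbers and function below are illustrative assumptions, not measurements): adding workers shrinks per-worker computation but grows coordination and communication cost, so total time is minimized at an intermediate degree of parallelism.

```python
def total_time(n_workers, work_items, per_item_cost, comm_cost_per_worker):
    """Toy distributed-cost model: compute time shrinks with workers,
    communication time grows linearly with them."""
    compute = work_items * per_item_cost / n_workers
    comm = comm_cost_per_worker * n_workers
    return compute + comm

# With 1e6 items at 1 microsecond each and 10 ms of communication per
# worker, the optimum is sqrt(1 / 0.01) = 10 workers, not the maximum.
best = min(range(1, 65),
           key=lambda n: total_time(n, 1_000_000, 1e-6, 0.01))
```

Balancing resources and efficiency amounts to finding such an optimum under a realistic, workload-dependent cost model rather than this idealized one.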
Optimizing queries over multi-modal data, and fully integrating and exploiting the value of such data, are key future directions.
Data quality management is the process of automatically detecting and repairing data errors through semantic rule discovery, improving data usability.
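As a concrete instance of rule-based repair, a discovered semantic rule might be the functional dependency zip → city. The sketch below (our illustration; the majority-vote repair policy is an assumption, not the group's algorithm) detects rows violating the rule and repairs them:

```python
from collections import Counter, defaultdict

def repair_fd(rows, lhs, rhs):
    """Repair violations of the functional dependency lhs -> rhs.

    Within each lhs group, minority rhs values are replaced by the
    most frequent one. A toy sketch of rule-based data repair.
    """
    groups = defaultdict(list)
    for row in rows:
        groups[row[lhs]].append(row[rhs])
    majority = {key: Counter(vals).most_common(1)[0][0]
                for key, vals in groups.items()}
    repaired = []
    for row in rows:
        fixed = dict(row)
        fixed[rhs] = majority[row[lhs]]  # overwrite minority values
        repaired.append(fixed)
    return repaired

rows = [
    {"zip": "10001", "city": "New York"},
    {"zip": "10001", "city": "New York"},
    {"zip": "10001", "city": "Newark"},   # violates zip -> city
]
clean = repair_fd(rows, "zip", "city")
```

Discovering which rules hold in the first place, rather than assuming them, is the harder research problem the paragraph above refers to.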