Sync with VCLab website + arXiv index trigger

Rebuild org README with website branding (logo, brand color, six directions, PQ816, OPPO), and embed all arXiv URLs to trigger HF Papers indexing.

README.md CHANGED
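The commit note above embeds arXiv URLs so that Hugging Face Papers indexing picks them up. As a rough self-check before pushing, a minimal sketch (assuming the post-2007 arXiv identifier scheme, e.g. the illustrative ID `2301.12345`) for listing which IDs a README actually links:

```python
import re

# Modern arXiv IDs look like "2301.12345": a 4-digit year/month block,
# a dot, then a 4- or 5-digit sequence, optionally versioned ("v2").
ARXIV_URL = re.compile(r"https?://arxiv\.org/(?:abs|pdf)/(\d{4}\.\d{4,5})(?:v\d+)?")

def find_arxiv_ids(markdown: str) -> list[str]:
    """Return the unique arXiv IDs linked in a README, in first-seen order."""
    seen: dict[str, None] = {}
    for match in ARXIV_URL.finditer(markdown):
        seen.setdefault(match.group(1))  # dict preserves insertion order
    return list(seen)

sample = "See [paper](https://arxiv.org/abs/2301.12345) and https://arxiv.org/pdf/2301.12345v2"
print(find_arxiv_ids(sample))  # ['2301.12345']
```

The ID and function name are hypothetical; the regex only covers the current arXiv scheme, not legacy IDs such as `cs/0112017`.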
<div align="center">

<img src="https://raw.githubusercontent.com/PolyU-VCLab/PolyU-VCLab.github.io/main/assets/icons/vclab-logo.svg" width="130" alt="VCLab Logo"/>

<h1>PolyU VCLab -- Visual Computing Lab</h1>

<p>
<b>Visual Computing Lab</b> at <b>The Hong Kong Polytechnic University</b><br/>
Directed by <a href="https://www4.comp.polyu.edu.hk/~cslzhang/"><b>Chair Professor Lei Zhang</b></a> (IEEE Fellow) | In collaboration with <b>OPPO Research Institute</b>
</p>

<p>
<i>"Y learning and beyond -- for future visual enhancement and understanding."</i>
</p>

<p>
<a href="https://polyu-vclab.github.io/"><img src="https://img.shields.io/badge/Lab%20Website-polyu--vclab.github.io-8F1329?style=for-the-badge&logo=githubpages&logoColor=white" alt="Website"></a>
<a href="https://www4.comp.polyu.edu.hk/~cslzhang/"><img src="https://img.shields.io/badge/Director-Prof.%20Lei%20Zhang-8F1329?style=for-the-badge" alt="Director"></a>
<a href="https://github.com/PolyU-VCLab"><img src="https://img.shields.io/badge/GitHub-PolyU--VCLab-181717?style=for-the-badge&logo=github" alt="GitHub"></a>
<a href="https://scholar.google.com/citations?user=tAK5l1IAAAAJ"><img src="https://img.shields.io/badge/Google%20Scholar-Lei%20Zhang-4285F4?style=for-the-badge&logo=googlescholar&logoColor=white" alt="Scholar"></a>
</p>

</div>
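The badges above follow the shields.io static-badge convention, where path segments are `label-message-color` and literal dashes inside a segment must be doubled (hence `polyu--vclab`). A minimal sketch of that escaping, assuming a hypothetical helper name and omitting the `style`/`logo` query parameters the README's badges also carry:

```python
from urllib.parse import quote

def shields_badge(label: str, message: str, color: str) -> str:
    """Build a shields.io static-badge URL (hypothetical helper).

    Dashes and underscores are field separators in the badge path,
    so literal ones are doubled; other specials are percent-encoded.
    """
    def escape(part: str) -> str:
        return quote(part.replace("-", "--").replace("_", "__"))
    return f"https://img.shields.io/badge/{escape(label)}-{escape(message)}-{color}"

# The lab-website badge used above, in the brand color 8F1329.
print(shields_badge("Lab Website", "polyu-vclab.github.io", "8F1329"))
# https://img.shields.io/badge/Lab%20Website-polyu--vclab.github.io-8F1329
```

Appending `?style=for-the-badge&logo=github` etc. reproduces the styled variants used in the header.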
## About the Lab

The **Visual Computing Lab (VCLab)** works across the full stack of modern visual intelligence: from low-level image and video restoration, through multimodal perception and reasoning, to generative models, 3D reconstruction, new network architectures, and rigorous evaluation benchmarks.

Our work regularly appears at **CVPR, ICCV, ECCV, NeurIPS, ICLR, TPAMI and IJCV**. Many of our methods have been deployed on **hundreds of millions of mobile devices** through our long-term collaboration with **OPPO Research Institute**.

**Office:** PQ816, Department of Computing, PolyU, Hung Hom, Kowloon, Hong Kong<br/>
**Contact:** `cslzhang [at] comp.polyu.edu.hk`

---
## Research Directions

Our work is organized into six Collections on this page. Each Collection is kept up to date with the latest papers, code, and models -- just click to explore.

| # | Collection | Explore |
| :-: | :-- | :-- |
| 01 | **Image / Video Restoration, Enhancement & Quality Assessment** <br/><sub>Real-world super-resolution, denoising, deblurring, HDR, video restoration, perceptual IQA.</sub> | [Open](https://huggingface.co/collections/VCLab-HKPU/image-video-restoration-enhancement-and-quality-assessment-69e88641cae9ce5e7cd7d102) |
| 02 | **Multimodal Perception, Understanding & Reasoning** <br/><sub>MLLM-driven visual perception, grounding and reasoning.</sub> | [Open](https://huggingface.co/collections/VCLab-HKPU/multimodal-perception-understanding-and-reasoning-69e88630a9632f74ae6235a9) |
| 03 | **Image & Video Synthesis and Generation** <br/><sub>Accelerating, distilling, and improving diffusion / AR / DiT generative models.</sub> | [Open](https://huggingface.co/collections/VCLab-HKPU/image-and-video-synthesis-and-generation-69e88631f170be22735da0ba) |
| 04 | **3D Perception, Reconstruction & Generation** <br/><sub>Sensing, reconstructing, synthesizing, and editing high-fidelity 3D worlds.</sub> | [Open](https://huggingface.co/collections/VCLab-HKPU/3d-perception-reconstruction-and-generation-69e8863209efbb5392f99651) |
| 05 | **Architecture & Training Paradigms** <br/><sub>New architectures for ViT / LLM / VLM, and efficient, decentralized training.</sub> | [Open](https://huggingface.co/collections/VCLab-HKPU/architecture-and-training-paradigms-69e886412df265f56289c0e7) |
| 06 | **Benchmarks & Datasets** <br/><sub>Evaluation benchmarks and training datasets for the visual computing community.</sub> | [Open](https://huggingface.co/collections/VCLab-HKPU/benchmarks-and-datasets-69e8863311eb2c5cc3ac1d9c) |

For the **complete publication list**, please visit Prof. Zhang's [publication page](https://www4.comp.polyu.edu.hk/~cslzhang/papers.htm) and the lab's [official website](https://polyu-vclab.github.io/).
---

## Industrial Collaboration

We maintain a long-term strategic collaboration with **OPPO Research Institute**, where many of our algorithms (denoising, super-resolution, HDR, face restoration, low-light enhancement, etc.) have been productized in consumer imaging systems deployed on hundreds of millions of mobile devices.

---
## Join Us

We are recruiting **PhD students** (jointly trained with OPPO Research Institute), **postdocs**, and **research interns** across all six research directions.

Interested candidates are welcome to send a CV and a brief statement of research interests to **`cslzhang [at] comp.polyu.edu.hk`**, or visit the [Join Us](https://polyu-vclab.github.io/#join-us) page on our website.

---
<div align="center">

<sub>

Maintained by the VCLab team. For bug reports on this page, open an issue in the [Space repository](https://huggingface.co/spaces/VCLab-HKPU/README/discussions).

**© Visual Computing Lab, The Hong Kong Polytechnic University**

</sub>

</div>