**Commit:** Sync with VCLab website + arXiv index trigger

Rebuild org README with website branding (logo, brand color, six directions, PQ816, OPPO), and embed all arXiv URLs to trigger HF Papers indexing.

**File changed:** README.md
<img src="https://raw.githubusercontent.com/PolyU-VCLab/PolyU-VCLab.github.io/main/assets/icons/vclab-logo.svg" width="110" alt="VCLab Logo"/>

# PolyU VCLab - Visual Computing Lab

### Advancing Visual Intelligence for a Better World.

*"Y learning and beyond -- for future visual enhancement and understanding."*

[Website](https://polyu-vclab.github.io/)
[Lei Zhang's Homepage](https://www4.comp.polyu.edu.hk/~cslzhang/)
## About the Lab

The **Visual Computing Lab (VCLab)** at **The Hong Kong Polytechnic University**, directed by **Chair Professor Lei Zhang** (IEEE Fellow), works across the full stack of modern visual intelligence: from low-level image and video restoration, through multimodal perception and reasoning, to generative models, 3D reconstruction, new network architectures, and evaluation benchmarks.

Our work regularly appears at **CVPR, ICCV, ECCV, NeurIPS, ICLR, TPAMI and IJCV**, and many of our methods have been deployed on **hundreds of millions of mobile devices** through our long-term collaboration with **OPPO Research Institute**.

**Office:** PQ816, Department of Computing, PolyU, Hung Hom, Kowloon, Hong Kong
**Contact:** `cslzhang [at] comp.polyu.edu.hk`

---
## Six Research Directions

Our publications are organized into six Collections below. Each Collection is kept up-to-date with the latest papers, code and models -- just click to explore.

| # | Direction | Open |
| :-: | :-- | :-- |
| 01 | **Image / Video Restoration, Enhancement & Quality Assessment** <br/><sub>Real-world super-resolution, denoising, deblurring, HDR, video restoration, perceptual IQA.</sub> | [Open Collection](https://huggingface.co/collections/VCLab-HKPU/image-video-restoration-enhancement-and-quality-assessment-69e88641cae9ce5e7cd7d102) |
| 02 | **Multimodal Perception, Understanding & Reasoning** <br/><sub>MLLM-driven visual perception, grounding and reasoning.</sub> | [Open Collection](https://huggingface.co/collections/VCLab-HKPU/multimodal-perception-understanding-and-reasoning-69e88630a9632f74ae6235a9) |
| 03 | **Image & Video Synthesis and Generation** <br/><sub>Accelerating, distilling and improving diffusion / AR / DiT generative models.</sub> | [Open Collection](https://huggingface.co/collections/VCLab-HKPU/image-and-video-synthesis-and-generation-69e88631f170be22735da0ba) |
| 04 | **3D Perception, Reconstruction & Generation** <br/><sub>Sensing, reconstructing, synthesizing and editing high-fidelity 3D worlds.</sub> | [Open Collection](https://huggingface.co/collections/VCLab-HKPU/3d-perception-reconstruction-and-generation-69e8863209efbb5392f99651) |
| 05 | **Architecture & Training Paradigms** <br/><sub>New architectures for ViT / LLM / VLM, and efficient, decentralized training.</sub> | [Open Collection](https://huggingface.co/collections/VCLab-HKPU/architecture-and-training-paradigms-69e886412df265f56289c0e7) |
| 06 | **Benchmarks & Datasets** <br/><sub>Evaluation benchmarks and training datasets for the visual computing community.</sub> | [Open Collection](https://huggingface.co/collections/VCLab-HKPU/benchmarks-and-datasets-69e8863311eb2c5cc3ac1d9c) |

> For the **complete publication list**, please visit Prof. Zhang's [publication page](https://www4.comp.polyu.edu.hk/~cslzhang/papers.htm) and the lab's [website](https://polyu-vclab.github.io/).
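The Collections can also be read programmatically. A minimal sketch, assuming the `huggingface_hub` Python package is installed and network access is available; the slug is taken from the Collection 01 link in the table:

```python
# Minimal sketch: list the items in one of the lab's Hugging Face Collections.
# Assumes `huggingface_hub` (pip install huggingface_hub) and network access;
# the slug below comes from the Collection 01 URL in the table above.
from huggingface_hub import get_collection

slug = (
    "VCLab-HKPU/image-video-restoration-enhancement-"
    "and-quality-assessment-69e88641cae9ce5e7cd7d102"
)
collection = get_collection(slug)

print(collection.title)
for item in collection.items:
    # item.item_type is e.g. "paper", "model", "dataset" or "space"
    print(f"[{item.item_type}] {item.item_id}")
```

The same call works for any of the six Collections by swapping in the slug from its URL.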
---
## Join Us

We are recruiting **PhD students** (jointly trained with OPPO Research Institute), **postdocs**, and **research interns** across all six research directions. Send your CV and a brief statement of interest to **cslzhang [at] comp.polyu.edu.hk**.

[More on our website](https://polyu-vclab.github.io/#join-us)

---
<div align="center">

[Website](https://polyu-vclab.github.io/)
[GitHub](https://github.com/PolyU-VCLab)
[Google Scholar](https://scholar.google.com/citations?user=tAK5l1IAAAAJ)

**(c) PolyU VCLab, The Hong Kong Polytechnic University**

</div>