Ph.D. Defense
Ph.D. Candidate: Subhankar Roy. Advisors: Prof. Elisa Ricci and Prof. Nicu Sebe. Università degli Studi di Trento.
The Need to Adapt Neural Networks Unsupervised Domain Adaptation Contributions Conclusions References
Overview
1 The Need to Adapt Neural Networks
2 Unsupervised Domain Adaptation
3 Contributions
  Domain Whitening Transform (CVPR'19)
  TriGAN for Multi-source Domain Adaptation (MVA'21)
  Curriculum Graph Co-teaching for Multi-target Domain Adaptation (CVPR'21)
  Uncertainty-aware Source-free Domain Adaptation (ECCV'22)
4 Conclusions
Domain-Shift Problem
Generalization Gap
Table 1: The generalization gap caused by domain-shift: accuracy drops from 99.3 to 20.2 (↓79.7%) in one benchmark and from 96.7 to 60.1 (↓37.8%) in another. (Image courtesy: PhotobookUK)
▶ Collecting labelled data from every test domain is prohibitive.
▶ Can we do better?
Test: Bed, Cycle, Kettle, ?
▶ Assumptions (covariate shift):
  pS(y | x) = pT(y | x)
  pS(x) ≠ pT(x)
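These assumptions can be illustrated with a toy sketch (everything below is illustrative, not from the thesis): a linear classifier is trained on a source domain and evaluated on a target domain whose inputs are translated, so p(x) changes while the learning task stays the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_domain(shift, n=500):
    # Two Gaussian classes; `shift` translates every input,
    # mimicking a change in p(x) between source and target.
    X0 = rng.normal(loc=[-2.0, 0.0], scale=1.0, size=(n, 2)) + shift
    X1 = rng.normal(loc=[+2.0, 0.0], scale=1.0, size=(n, 2)) + shift
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

def train_logreg(X, y, lr=0.1, steps=500):
    # Plain logistic regression trained by gradient descent.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

def accuracy(w, b, X, y):
    return float((((X @ w + b) > 0).astype(int) == y).mean())

Xs, ys = make_domain(shift=np.array([0.0, 0.0]))   # source domain
Xt, yt = make_domain(shift=np.array([3.0, 0.0]))   # shifted target
w, b = train_logreg(Xs, ys)
src_acc, tgt_acc = accuracy(w, b, Xs, ys), accuracy(w, b, Xt, yt)
# the boundary learned on the source generalizes poorly to the
# shifted target, even though the underlying task is unchanged
```

The drop in `tgt_acc` relative to `src_acc` is a miniature version of the generalization gap in Table 1.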
Figure 1: UDA can be addressed under different settings depending upon the assumptions
made on the source and target domains. Each setting has its own challenges.
Applications
Figure 2: Example scenarios where domain-shift can manifest in real-world applications. UDA techniques can help tackle such challenges.
Benchmarks
▶ Digits (5 domains): MNIST, USPS, SVHN, Color-MNIST (MNIST-M), Synthetic Digits
▶ CIFAR-STL (2 domains): CIFAR-10, STL10
▶ VisDA-C (2 domains): Synthetic images, real-world objects from COCO
▶ DomainNet (6 domains): Clipart, Infograph, Painting, Quickdraw, Real, Sketch
Contributions
The contributions of this doctoral study are aimed at the following research avenues:
▶ To understand the principles of overcoming domain-shift in image classification tasks, starting from the single-source single-target DA (STDA) setting.
▶ To improve upon existing state-of-the-art STDA methods.
▶ To progressively address more practical, real-world DA scenarios.
▶ To explore and bring interesting ideas from the field of machine learning to the computer vision community, in the context of neural network adaptation.
where W_B⊤ W_B = Σ_B⁻¹ and Ω = (µ_B, Σ_B); Ω is domain-specific, while γ and β are domain-agnostic.
▶ W_B is the whitening matrix, and (µ_B, Σ_B) are the domain-specific mean and covariance. γ and β are the learnable parameters, as in BN (Ioffe and Szegedy, 2015).
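As an illustrative sketch (not the thesis implementation), the domain-specific whitening step can be realized with a ZCA whitening matrix W_B = Σ_B^{-1/2}, which satisfies W_B⊤ W_B = Σ_B⁻¹:

```python
import numpy as np

def domain_whitening(X, eps=1e-5):
    # X: (N, d) features from a single domain (one batch).
    # Compute the domain-specific statistics (mu_B, Sigma_B).
    mu = X.mean(axis=0)
    Xc = X - mu
    Sigma = Xc.T @ Xc / X.shape[0] + eps * np.eye(X.shape[1])
    # ZCA whitening matrix W = Sigma^{-1/2}; since W is symmetric,
    # W.T @ W = Sigma^{-1} as required.
    vals, vecs = np.linalg.eigh(Sigma)
    W = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return Xc @ W.T  # whitened features: ~identity covariance
```

In DWT each domain keeps its own (µ_B, Σ_B), computed batch-wise as above, while the learnable γ and β are shared across domains.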
To better utilize the unlabelled target data, we propose the Min-Entropy Consensus Loss
that optimizes the following simultaneously:
▶ Minimizes the entropy to ensure that the predictor maximally separates the
target data
▶ Minimizes the consistency loss to force the predictor to deliver consistent
predictions for target samples
MEC Loss is given as:

ℓt(x_i^{t1}, x_i^{t2}) = −(1/2) max_{y∈Y} [ log p(y | x_i^{t1}) + log p(y | x_i^{t2}) ],   (3)

where x_i^{t1} and x_i^{t2} are augmented versions of the same target sample x_i^t.
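A minimal sketch of the MEC loss in Eq. (3) on batched log-probabilities (illustrative, not the official implementation):

```python
import numpy as np

def mec_loss(logp1, logp2):
    # logp1, logp2: (N, K) log-probabilities for two augmented
    # views of the same N target samples. Taking the max over
    # classes of the SUMMED log-likelihoods rewards predictions
    # that are both confident (low entropy) and consistent
    # across the two views, matching Eq. (3).
    return float((-0.5 * np.max(logp1 + logp2, axis=1)).mean())
```

Confident, consistent predictions drive the loss toward zero; uniform or disagreeing predictions keep it large.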
Overall Pipeline
Figure 6: Overview of the proposed deep architecture containing the embedded DWT layers,
which is trained with the Cross Entropy and MEC loss
Results on Digits

Methods (w/o aug)            MNIST→USPS   USPS→MNIST   SVHN→MNIST   MNIST→SVHN
Source Only                  78.9         57.1±1.7     60.1±1.1     20.23±1.8
CORAL (Sun et al., 2016)     81.7         -            63.1         -
MMD (Tzeng et al., 2015)     81.1         -            71.1         -
DANN (Ganin et al., 2016)    85.1         73.0±2.0     73.9         35.7
Results on CIFAR-STL

Methods                              CIFAR-10→STL   STL→CIFAR-10
w/o aug:
Source Only                          60.35          51.88
DANN (Ganin et al., 2016)            66.12          56.91
DRCN (Ghifary et al., 2016)          66.37          58.65
AutoDIAL (Carlucci et al., 2017)     79.10          70.15
DWT                                  79.75±0.25     71.18±0.56
w/ aug:
SE (French et al., 2018)             77.53±0.11     71.65±0.67
Results on Office-Home

Methods                        Ar→Cl Ar→Pr Ar→Rw Cl→Ar Cl→Pr Cl→Rw Pr→Ar Pr→Cl Pr→Rw Rw→Ar Rw→Cl Rw→Pr  Avg
Source Only                    34.9  50.0  58.0  37.4  41.9  46.2  38.5  31.2  60.4  53.9  41.2  59.9   46.1
DAN (Long and Wang, 2015)      43.6  57.0  67.9  45.8  56.5  60.4  44.0  43.6  67.7  63.1  51.5  74.3   56.3
DANN (Ganin et al., 2016)      45.6  59.3  70.1  47.0  58.5  60.9  46.1  43.7  68.5  63.2  51.8  76.8   57.6
JAN (Long et al., 2017)        45.9  61.2  68.9  50.4  59.7  61.0  45.8  43.4  70.3  63.9  52.4  76.8   58.3
SE (French et al., 2018)       48.8  61.8  72.8  54.1  63.2  65.1  50.6  49.2  72.3  66.1  55.9  78.7   61.5
CDAN-RM (Long et al., 2018)    49.2  64.8  72.9  53.8  63.9  62.9  49.8  48.8  71.5  65.8  56.4  79.2   61.6
CDAN-M (Long et al., 2018)     50.6  65.9  73.4  55.7  62.7  64.2  51.8  49.1  74.5  68.2  56.9  80.7   62.8
DWT-MEC                        50.3  72.1  77.0  59.6  69.3  70.2  58.3  48.1  77.3  69.3  53.6  82.0   65.6
Table 4: Comparison with the state-of-the-art methods on Office-Home dataset with ResNet-50
as encoder. Art (Ar), Clipart (Cl), Product (Pr) and Real-World (Rw) are the four domains in
the Office-Home dataset.
Figure 9: Pipeline for unpaired image translation between a pair of domains (autumn and summer).

Challenges in Multi-source DA with a generative framework:
▶ The number of generators and discriminators scales linearly with the number of source domains.
▶ Cannot utilize all the data from the different source domains.
Figure 10: The content is represented by the object's shape (square, circle, or triangle). The domain is represented by holes, 3D objects, and skewered objects. The style is represented by the colour.
▶ The encoder sequentially removes style and domain information to obtain a domain-invariant representation.
▶ The decoder symmetrically projects it back onto the domain-specific and style-specific distributions.
Domain-specific Representation
Results on Digits-Five

Each column reports the target domain; the remaining four domains act as sources (mt: MNIST, mm: MNIST-M, up: USPS, sv: SVHN, sy: Synthetic Digits).

Settings   Methods                        →mm          →mt          →up          →sv          →sy          Avg
Source     Source Only                    63.70±0.83   92.30±0.91   90.71±0.54   71.51±0.75   83.44±0.79   80.33
Combine    DAN (Long and Wang, 2015)      67.87±0.75   97.50±0.62   93.49±0.85   67.80±0.84   86.93±0.93   82.72
           DANN (Ganin et al., 2016)      70.81±0.94   97.90±0.83   93.47±0.79   68.50±0.85   87.37±0.68   83.61
Multi-     Source Only                    63.37±0.74   90.50±0.83   88.71±0.89   63.54±0.93   82.44±0.65   77.71
Source     DAN (Long and Wang, 2015)      63.78±0.71   96.31±0.54   94.24±0.87   62.45±0.72   85.43±0.77   80.44
           CORAL (Sun et al., 2016)       62.53±0.69   97.21±0.83   93.45±0.82   64.40±0.72   82.77±0.69   80.07
           DANN (Ganin et al., 2016)      71.30±0.56   97.60±0.75   92.33±0.85   63.48±0.79   85.34±0.84   82.01
           ADDA (Tzeng et al., 2017)      71.57±0.52   97.89±0.84   92.83±0.74   75.48±0.48   86.45±0.62   84.84
           DCTN (Xu et al., 2018)         70.53±1.24   96.23±0.82   92.81±0.27   77.61±0.41   86.77±0.78   84.79
           M3SDA (Peng et al., 2019a)     72.82±1.13   98.43±0.68   96.14±0.81   81.32±0.86   89.58±0.56   87.65
           StarGAN (Choi et al., 2018)    44.71±1.39   96.26±0.62   55.32±3.71   58.93±1.95   63.36±2.41   63.71
           TriGAN                         83.20±0.78   97.20±0.45   94.08±0.92   85.66±0.79   90.30±0.57   90.08
Results on Office-Caltech
Table 7: Ablation analysis of the main TriGAN components using the Digits-Five dataset.
Table 8: Comparison with the state-of-the-art GAN-based methods in the single-source UDA setting on Digits. The best number is in bold and the second best is underlined.
Figure 15: Generations of TriGAN across different domains of Digits-Five (MNIST, MNIST-M, SVHN, SYNDIGITS, USPS). The leftmost column shows the source images, one from each domain, and the topmost row shows the style images from the target domain, two from each domain.
Figure 16: Generations of TriGAN across different domains of Office-Caltech (Caltech, DSLR, Webcam, Amazon). The leftmost column shows the source images, one from each domain, and the topmost row shows the style images from the target domain, two from each domain.
Figure 17: Generations of TriGAN across different domains of Alps Seasons (Spring, Summer, Winter, Autumn). The leftmost column shows the source images, one from each domain, and the topmost row shows the style images from the target domain, two from each domain.
Figure 18: Schematic representation of multiple domain-shifts w.r.t. the target in MTDA.
Figure 19: An instance of a real-world application where MTDA is of significance.
Why is it challenging?
▶ Multiple and varying domain shifts.
▶ Non-overlapping support with the source.
▶ The problem of negative transfer.
Figure 21: To prevent error propagation in the GCN, we adopt co-teaching with a dual-head classifier. This results in improved pseudo-labels for the target samples.
Overall Pipeline
Figure 22: Overall framework of curriculum graph co-teaching (CGCT) for multi-target DA
Training Objectives
Training the overall framework includes training the shared feature extractor, domain
discriminator, the MLP classifier Gmlp , and the GCN classifier Ggcn , which is composed
of edge network Fedge and node classifier Fnode . For the target samples:
▶ Fedge is trained with a binary cross-entropy loss using pseudo-labels from Gmlp.
▶ Fnode is trained with a cross-entropy loss using pseudo-labels from Gmlp.
▶ Gmlp is trained with a cross-entropy loss using the pseudo-labels obtained from Ggcn.
▶ We also use the adversarial loss of CDAN (Long et al., 2018) to ensure even better domain alignment.
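The pseudo-label exchange between the two heads can be sketched as follows (a simplified illustration; the names `logits_mlp` and `logits_gcn` are mine, and the real framework additionally filters pseudo-labels and trains the heads jointly with the feature extractor and domain discriminator):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    p = softmax(logits)
    return float(-np.log(p[np.arange(len(labels)), labels] + 1e-12).mean())

def coteaching_losses(logits_mlp, logits_gcn):
    # Each classifier head is supervised with pseudo-labels
    # produced by the OTHER head, so neither head keeps
    # reinforcing its own mistakes (co-teaching).
    pl_from_mlp = logits_mlp.argmax(axis=1)  # supervises G_gcn
    pl_from_gcn = logits_gcn.argmax(axis=1)  # supervises G_mlp
    loss_gcn = cross_entropy(logits_gcn, pl_from_mlp)
    loss_mlp = cross_entropy(logits_mlp, pl_from_gcn)
    return loss_mlp, loss_gcn
```

When the two heads agree confidently, both losses are near zero; disagreement between the heads produces a large corrective signal.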
Figure 23: Domain Curriculum Learning is based on the Easy-to-Hard Domain Selection (EHDS) strategy. The adaptation process starts with the easiest domain and proceeds to the hardest one. Expected entropy is used as the metric to sort the domains.
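The EHDS ordering can be sketched as follows, assuming we already have the source model's softmax outputs on each unlabelled target domain (the dict layout is my own, illustrative choice):

```python
import numpy as np

def expected_entropy(P, eps=1e-12):
    # P: (N, K) softmax outputs of the source model on one
    # unlabelled target domain.
    return float((-(P * np.log(P + eps)).sum(axis=1)).mean())

def easy_to_hard_order(probs_per_domain):
    # Lower expected prediction entropy means the model is more
    # certain, so the domain is "easier" and is adapted to first.
    return sorted(probs_per_domain,
                  key=lambda name: expected_entropy(probs_per_domain[name]))
```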
Results on Digits-Five

Settings   Methods                                MNIST   MNIST-M   SVHN   Syn. Digits   USPS   Avg (%)
Target     Source Only                            26.9    56.0      67.2   73.8          36.9   52.2
Combined   ADDA (Tzeng et al., 2017)              43.7    55.9      40.4   66.1          34.8   48.2
           DAN (Long and Wang, 2015)              31.3    53.1      48.7   63.3          27.0   44.7
           GTA (Sankaranarayanan et al., 2018)    44.6    54.5      60.3   74.5          41.3   55.0
           DANN (Ganin et al., 2016)              52.4    64.0      65.3   66.6          44.3   58.5
           AMEAN (Chen et al., 2019b)             56.2    65.2      67.3   71.3          47.5   61.5
           CDAN (Long et al., 2018)               53.0    76.3      65.6   81.5          56.2   66.5
           CGCT                                   54.3    85.5      83.8   87.8          52.4   72.8
Multi-     CDAN (Long et al., 2018)               53.7    76.2      64.4   80.3          46.2   64.2
Target     CDAN + DCL                             62.0    87.8      87.8   92.3          63.2   78.6
           D-CGCT                                 65.7    89.0      88.9   93.2          62.9   79.9
Table 10: Comparison with the MTDA state-of-the-art methods on Office-31 and Office-Home
Results on DomainNet

Methods                        Clipart  Infograph  Painting  Quickdraw  Real  Sketch  Avg (%)
Source Only                    25.6     16.8       25.8      9.2        20.6  22.3    20.1
SE (French et al., 2018)       21.3     8.5        14.5      13.8       16.0  19.7    15.6
MCD (Saito et al., 2018)       25.1     19.1       27.0      10.4       20.2  22.5    20.7
DADA (Peng et al., 2019b)      26.1     20.0       26.5      12.9       20.7  22.8    21.5
CDAN (Long et al., 2018)       31.6     27.1       31.8      12.5       33.2  35.8    28.7
MCC (Jin et al., 2020)         33.6     30.0       32.4      13.5       28.0  35.3    28.8
CDAN + DCL                     35.1     31.4       37.0      20.5       35.4  41.0    33.4
CGCT                           36.1     33.3       35.0      10.0       39.6  39.7    32.3
D-CGCT                         37.0     32.2       37.3      19.3       39.8  40.8    34.4
Table 11: Comparison with the MTDA state-of-the-art methods on DomainNet. All the
reported numbers are evaluated on the multi-target setting.
Ablation Analysis
Office-Home
Methods Art Clipart Product Real Avg(%)
Source train 51.45 43.93 42.41 54.50 48.07
CDAN (Baseline) 50.70 50.78 47.95 57.63 51.77
Baseline† 52.08 53.21 48.62 58.49 53.10
Baseline† +PL 54.61 56.13 50.25 61.04 55.51
Baseline† + DCL 55.94 56.66 52.85 60.18 56.41
Baseline† +GCN‡ 50.19 49.09 46.52 60.76 51.64
Baseline† +GCN‡ + PL 54.52 57.60 53.20 65.49 57.70
CGCT 60.81 60.00 54.13 62.62 59.39
D-CGCT 61.42 60.73 57.27 63.8 60.81
Table 12: Ablation analysis on Office-Home. Baseline: CDAN model that combines all the
target domains. “†”: the baseline models that use the domain labels of the target. GCN‡: the
baseline model with the GCN as the single classification head. PL: using pseudo-labels.
Ablation Analysis
Figure 24: Comparison of the DCL with the reverse-domain curriculum model on Office-Home
and Digits-Five. In the reverse-domain curriculum model the order of selection of target
domains is exactly opposite to that of the DCL model.
Overconfident Predictions
Hein et al. (2019) showed that, due to their extrapolation behaviour, ReLU networks tend to produce highly confident predictions far from the training data.
Laplace Approximation
▶ Instead of a point estimate θ, we want a posterior distribution over the model parameters:

p(θ | D) = p(θ) p(D | θ) / p(D)

▶ Prediction requires marginalization over the weights:

p(y_k | x, D) = ∫ ϕ_k(f_θ(x)) p(θ | D) dθ.   (15)
Laplace Approximation
▶ A trained neural network gives the local maximum of the posterior, θ_MAP.
▶ LA uses a second-order Taylor expansion around θ_MAP to give an approximate multivariate Gaussian posterior:

p(θ | D) ≈ N(θ | θ_MAP, H⁻¹),   (16)

where the covariance matrix is given by the inverse of the Hessian of the negative log-posterior, H := −∇²_θ log p(θ | D) |_{θ_MAP}.
▶ Now we can sample from the posterior to make predictions:

p(y_k | x, D[S]) ≈ ∫ ϕ_k(f_θ(x)) N(θ | θ_MAP, H⁻¹) dθ.   (17)
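Eq. (17) can be approximated by Monte Carlo sampling from the Gaussian posterior. A minimal sketch follows; the toy two-class model `f` and the scale of the covariance factor are illustrative choices of mine, not from the thesis:

```python
import numpy as np

def mc_predictive(x, theta_map, cov_chol, f, n_samples=200, seed=0):
    # Average softmax predictions over weight samples
    # theta_j ~ N(theta_MAP, H^{-1}); cov_chol is a Cholesky
    # factor of the posterior covariance H^{-1}.
    rng = np.random.default_rng(seed)
    probs = np.zeros_like(f(x, theta_map))
    for _ in range(n_samples):
        theta = theta_map + cov_chol @ rng.standard_normal(theta_map.size)
        logits = f(x, theta)
        e = np.exp(logits - logits.max())
        probs += e / e.sum()
    return probs / n_samples

# toy 2-class model: the logits are the weights scaled by a scalar input
f = lambda x, theta: theta * x
p = mc_predictive(2.0, np.array([1.0, -1.0]), 0.3 * np.eye(2), f)
```

Averaging over posterior samples typically yields softer, less overconfident predictions than the single MAP estimate, which is exactly what the uncertainty weighting below exploits.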
▶ The posterior predictive is approximated by Monte Carlo averaging over M weight samples:

p(y_k | z[T]) ≈ (1/M) Σ_{j=1}^{M} ϕ_k(h_{θ_j}(z[T])).   (18)

Figure 32: The feature extractor g is trained under an uncertainty-aware composite loss that weighs samples according to their predictive uncertainty.
▶ Uncertainty weights are computed as w_i = exp(−H), where H is the predictive entropy, giving the uncertainty-guided entropy loss:

L_ent^{ug} = −E_{p(x[T])} [ w Σ_{k=1}^{K} ϕ_k(f(x[T])) log ϕ_k(f(x[T])) ].   (19)
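A sketch of Eq. (19) with the exponential-of-negative-entropy weights (illustrative; the full composite loss in the thesis has additional terms):

```python
import numpy as np

def uncertainty_guided_entropy(probs, eps=1e-12):
    # probs: (N, K) posterior predictive probabilities (Eq. 18).
    # Per-sample entropy H_i and weight w_i = exp(-H_i):
    # uncertain samples (large H_i) are down-weighted so they
    # cannot dominate the entropy-minimization objective.
    H = -(probs * np.log(probs + eps)).sum(axis=1)
    w = np.exp(-H)
    return float((w * H).mean())
```

A batch of confident target samples contributes almost nothing to the loss, while uncertain samples are discounted rather than driving the decision boundary.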
Figure 33: Comparison of conventional IM (MAP) with our uncertainty-guided IM. The solid vs. hollow circles represent the source and the target data, respectively.
Figure 34: Under strong domain-shift, IM, when used with a MAP estimate, finds a completely flipped decision boundary. U-SFAN finds the correct decision boundary by down-weighting the far-away target data.
Methods A→C A→P A→R C→A C→P C→R P→A P→C P→R R→A R→C R→P Avg.
Source Only 34.9 50.0 58.0 37.4 41.9 46.2 38.5 31.2 60.4 53.9 41.2 59.9 46.1
DANN (Ganin et al., 2016) 45.6 59.3 70.1 47.0 58.5 60.9 46.1 43.7 68.5 63.2 51.8 76.8 57.6
DWT-MEC (Roy et al., 2019) 50.3 72.1 77.0 59.6 69.3 70.2 58.3 48.1 77.3 69.3 53.6 82.0 65.6
CDAN (Long et al., 2018) 50.7 70.6 76.0 57.6 70.0 70.0 57.4 50.9 77.3 70.9 56.7 81.6 65.8
SAFN (Xu et al., 2019) 52.0 71.7 76.3 64.2 69.9 71.9 63.7 51.4 77.1 70.9 57.1 81.5 67.3
LSC (Yang et al., 2021) 57.9 78.6 81.0 66.7 77.2 77.2 65.6 56.0 82.2 72.0 57.8 83.4 71.3
SHOT-IM (Liang et al., 2020) 55.4 76.6 80.4 66.9 74.3 75.4 65.6 54.8 80.7 73.7 58.4 83.4 70.5
U-SFAN 58.5 78.6 81.1 66.6 75.2 77.9 66.3 57.9 80.6 73.6 61.4 84.1 71.8
A2 Net (Xia et al., 2021) 58.4 79.0 82.4 67.5 79.3 78.9 68.0 56.2 82.9 74.1 60.5 85.0 72.8
SHOT (Liang et al., 2020) 57.1 78.1 81.5 68.0 78.2 78.1 67.4 54.9 82.2 73.3 58.8 84.3 71.8
U-SFAN+ 57.8 77.8 81.6 67.9 77.3 79.2 67.2 54.7 81.2 73.3 60.3 83.9 71.9
SHOT++ (Liang et al., 2021) 57.9 79.7 82.5 68.5 79.6 79.3 68.5 57.0 83.0 73.7 60.7 84.9 73.0
Table 13: Comparison with the SFDA state-of-the-art on Office-Home for the closed-set setting. A²Net (Xia et al., 2021) and SHOT++ (Liang et al., 2021) use a multitude of training losses.
Methods A→C A→P A→R C→A C→P C→R P→A P→C P→R R→A R→C R→P Avg.
Source Only 53.4 52.7 51.9 69.3 61.8 74.1 61.4 64.0 70.0 78.7 71.0 74.9 65.3
ATI-λ (Busto and Gall, 2017) 55.2 52.6 53.5 69.1 63.5 74.1 61.7 64.5 70.7 79.2 72.9 75.8 66.1
OpenMax (Bendale and Boult, 2016) 56.5 52.9 53.7 69.1 64.8 74.5 64.1 64.0 71.2 80.3 73.0 76.9 66.7
STA (Liu et al., 2019) 58.1 53.1 54.4 71.6 69.3 81.9 63.4 65.2 74.9 85.0 75.8 80.8 69.5
SHOT-IM (Liang et al., 2020) 62.5 77.8 83.9 60.9 73.4 79.4 64.7 58.7 83.1 69.1 62.0 82.1 71.5
SHOT (Liang et al., 2020) 64.5 80.4 84.7 63.1 75.4 81.2 65.3 59.3 83.3 69.6 64.6 82.3 72.8
U-SFAN 62.9 77.9 84.0 67.9 74.6 79.6 68.8 61.3 83.3 76.0 63.9 82.3 73.5
Table 16: Comparison with the SFDA state-of-the-art on Office-Home for the open-set setting. Open-set (OS) classification accuracy, which also includes the accuracy on the unknown class, is reported.
Conclusions
Future Work
Roadmap of Ph.D. (2018 – 2022/23)
▶ Domain Adaptation: CVPR (DWT), ICIAP (F2WCT), MVA (TriGAN), CVPR (CGCT), ECCV (U-SFAN), WACV (CoaST), Under Review (WACV)
▶ Continual Learning: ECCV (FRoST), Under Review (ICLR)
▶ Novel Class Discovery: CVPR (NCL)
Legend: Not in Thesis, Miscellaneous
Publications
1. Yangsong Zhang, Subhankar Roy, Hongtao Lu, Elisa Ricci, Stéphane Lathuilière.
“Cooperative Self-Training for Multi-Target Adaptive Semantic Segmentation”. In
WACV, 2023.
2. Subhankar Roy, Mingxuan Liu, Zhun Zhong, Nicu Sebe, Elisa Ricci.
“Class-incremental Novel Class Discovery”. In ECCV, 2022.
3. Subhankar Roy, Martin Trapp, Andrea Pilzer, Juho Kannala, Nicu Sebe, Elisa
Ricci, Arno Solin. “Uncertainty-guided Source-free Domain Adaptation”. In
ECCV, 2022.
4. Subhankar Roy, Evgeny Krivosheev, Zhun Zhong, Nicu Sebe, Elisa Ricci.
“Curriculum Graph Co-Teaching for Multi-Target Domain Adaptation”. In CVPR,
2021.
5. Zhun Zhong, Enrico Fini, Subhankar Roy, Zhiming Luo, Elisa Ricci, Nicu Sebe.
“Neighborhood Contrastive Learning for Novel Class Discovery”. In CVPR, 2021.
Publications
6. Subhankar Roy, Aliaksandr Siarohin, Enver Sangineto, Samuel Rota Bulo, Nicu
Sebe, Elisa Ricci. “TriGAN: Image-to-Image Translation for Multi-Source Domain
Adaptation”. In MVAP, 2021.
7. Aliaksandr Siarohin, Subhankar Roy, Stéphane Lathuilière, Sergey Tulyakov, Elisa
Ricci, Nicu Sebe. “Motion-supervised Co-Part Segmentation”. In ICPR, 2021.
8. Subhankar Roy, Willi Menapace, Sebastiaan Oei, Ben Luijten, Enrico Fini,
Cristiano Saltori, Iris Huijben, Nishith Chennakeshava, Federico Mento,
Alessandro Sentelli, Emanuele Peschiera, Riccardo Trevisan, Giovanni Maschietto,
Elena Torri, Riccardo Inchingolo, Andrea Smargiassi, Gino Soldati, Paolo Rota,
Andrea Passerini, Ruud JG Van Sloun, Elisa Ricci, Libertario Demi. “Deep
learning for classification and localization of COVID-19 markers in point-of-care
lung ultrasound”. In TMI, 2020.
Collaborators
References
Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and
Jennifer Wortman Vaughan. A theory of learning from different domains. Machine
learning, 79(1):151–175, 2010.
Abhijit Bendale and Terrance E Boult. Towards open set deep networks. In Proceedings
of the IEEE conference on Computer Vision and Pattern Recognition (CVPR), pages
1563–1572, 2016.
David M Blei, Alp Kucukelbir, and Jon D McAuliffe. Variational inference: A review
for statisticians. Journal of the American statistical Association, 112(518):859–877,
2017.
Konstantinos Bousmalis, Nathan Silberman, David Dohan, Dumitru Erhan, and Dilip
Krishnan. Unsupervised pixel-level domain adaptation with generative adversarial
networks. In CVPR, 2017.
Pau Panareda Busto and Juergen Gall. Open set domain adaptation. In Proceedings of
the IEEE International Conference on Computer Vision (ICCV), pages 754–763, 2017.
Nitay Calderon, Eyal Ben-David, Amir Feder, and Roi Reichart. Docogen: Domain
counterfactual generation for low resource domain adaptation. In Smaranda Muresan,
Preslav Nakov, and Aline Villavicencio, editors, Proceedings of the 60th Annual
Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 7727–7746. Association for
Computational Linguistics, 2022. URL https://aclanthology.org/2022.acl-long.533.
Fabio Carlucci, Lorenzo Porzi, Barbara Caputo, Elisa Ricci, and Samuel Rota Bulo.
Autodial: Automatic domain alignment layers. In Proceedings of the IEEE
international conference on computer vision, pages 5067–5075, 2017.
Xinyang Chen, Sinan Wang, Mingsheng Long, and Jianmin Wang. Transferability vs.
discriminability: Batch spectral penalization for adversarial domain adaptation. In
Proceedings of the International Conference on Machine Learning (ICML), pages
1081–1090, 2019a.
Ziliang Chen, Jingyu Zhuang, Xiaodan Liang, and Liang Lin. Blending-target domain
adaptation by adversarial meta-adaptation networks. In Proc. CVPR, 2019b.
Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul
Choo. StarGAN: Unified generative adversarial networks for multi-domain image-to-
image translation. In CVPR, 2018.
Gabriela Csurka. A comprehensive survey on domain adaptation for visual applications.
Domain Adaptation in Computer Vision Applications, pages 1–35, 2017.
Geoff French, Michal Mackiewicz, and Mark Fisher. Self-ensembling for visual domain
adaptation. ICLR, 2018.
Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing
model uncertainty in deep learning. In Proceedings of the International Conference
on Machine Learning (ICML), pages 1050–1059, 2016.
Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle,
François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial
training of neural networks. Journal of Machine Learning Research, 17(59):1–35, 2016.
Muhammad Ghifary, W Bastiaan Kleijn, Mengjie Zhang, David Balduzzi, and Wen Li.
Deep reconstruction-classification networks for unsupervised domain adaptation. In
ECCV, 2016.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for
image recognition. In CVPR, 2016.
Matthias Hein, Maksym Andriushchenko, and Julian Bitterwolf. Why ReLU networks yield
high-confidence predictions far away from the training data and how to mitigate the
problem. In Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition (CVPR), pages 41–50, 2019.
Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu, Phillip Isola, Kate Saenko,
Alexei A Efros, and Trevor Darrell. Cycada: Cycle-consistent adversarial domain
adaptation. In ICML, 2017.
Haoshuo Huang, Qixing Huang, and Philipp Krahenbuhl. Domain transfer through deep
activation matching. In ECCV, 2018.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network
training by reducing internal covariate shift. In ICML, 2015.
Ying Jin, Ximei Wang, Mingsheng Long, and Jianmin Wang. Minimum class confusion
for versatile domain adaptation. In Proc. ECCV, 2020.
Agustinus Kristiadi, Matthias Hein, and Philipp Hennig. Being Bayesian, even just a bit,
fixes overconfidence in ReLU networks. In Proceedings of the International Conference
on Machine Learning (ICML), pages 5436–5446, 2020.
Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable
predictive uncertainty estimation using deep ensembles. In Advances in Neural
Information Processing Systems (NeurIPS), 2017.
Rui Li, Qianfen Jiao, Wenming Cao, Hau-San Wong, and Si Wu. Model adaptation:
Unsupervised domain adaptation without source data. In Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), pages 9641–9650,
2020.
Yijun Li, Chen Fang, Jimei Yang, Zhaowen Wang, Xin Lu, and Ming-Hsuan Yang.
Universal style transfer via feature transforms. Advances in neural information
processing systems, 30, 2017.
Jian Liang, Dapeng Hu, and Jiashi Feng. Do we really need to access the source data?
Source hypothesis transfer for unsupervised domain adaptation. In Proceedings of the
International Conference on Machine Learning (ICML), pages 6028–6039, 2020.
Jian Liang, Dapeng Hu, Yunbo Wang, Ran He, and Jiashi Feng. Source data-absent
unsupervised domain adaptation through hypothesis transfer and labeling transfer.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
Jian Liang, Dapeng Hu, Jiashi Feng, and Ran He. Dine: Domain adaptation from single
and multiple black-box predictors. In Proceedings of the IEEE/CVF Conference on
Computer Vision and Pattern Recognition, pages 8003–8013, 2022.
Hong Liu, Zhangjie Cao, Mingsheng Long, Jianmin Wang, and Qiang Yang. Separate
to adapt: Open set domain adaptation via progressive separation. In Proceedings of
the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages
2927–2936, 2019.
Ming-Yu Liu and Oncel Tuzel. Coupled generative adversarial networks. In NeurIPS,
2016.
Ming-Yu Liu, Thomas Breuel, and Jan Kautz. Unsupervised image-to-image translation
networks. In NeurIPS, 2017.
Mingsheng Long and Jianmin Wang. Learning transferable features with deep adaptation
networks. In ICML, 2015.
Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I Jordan. Deep transfer learning
with joint adaptation networks. ICML, 2017.
Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I Jordan. Conditional
adversarial domain adaptation. In NeurIPS, 2018.
Zak Murez, Soheil Kolouri, David Kriegman, Ravi Ramamoorthi, and Kyungnam Kim.
Image to image translation for domain adaptation. In CVPR, 2018.
Radford M Neal. Bayesian Learning for Neural Networks. Springer Science & Business
Media, 2012.
Le Thanh Nguyen-Meidine, Madhu Kiran, Jose Dolz, Eric Granger, Atif Belal, and
Louis-Antoine Blais-Morin. Unsupervised multi-target domain adaptation through
knowledge distillation. In Proc. WACV, 2021.
Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang.
Moment matching for multi-source domain adaptation. ICCV, 2019a.
Xingchao Peng, Zijun Huang, Ximeng Sun, and Kate Saenko. Domain agnostic learning
with disentangled representations. In ICML, 2019b.
Viraj Prabhu, Arjun Chandrasekaran, Kate Saenko, and Judy Hoffman. Active domain
adaptation via clustering uncertainty-weighted embeddings. In Proceedings of the
IEEE/CVF International Conference on Computer Vision, pages 8505–8514, 2021.
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer.
High-resolution image synthesis with latent diffusion models. In CVPR, 2022.
Subhankar Roy, Aliaksandr Siarohin, Enver Sangineto, Samuel Rota Bulo, Nicu Sebe, and
Elisa Ricci. Unsupervised domain adaptation using feature-whitening and consensus
loss. CVPR, 2019.
Paolo Russo, Fabio Maria Carlucci, Tatiana Tommasi, and Barbara Caputo. From source
to target and back: symmetric bi-directional adaptive GAN. In CVPR, 2018.
Kuniaki Saito, Kohei Watanabe, Yoshitaka Ushiku, and Tatsuya Harada. Maximum
classifier discrepancy for unsupervised domain adaptation. In Proc. CVPR, 2018.
Swami Sankaranarayanan, Yogesh Balaji, Carlos D Castillo, and Rama Chellappa.
Generate to adapt: Aligning domains using generative adversarial networks. In
CVPR, 2018.
Aliaksandr Siarohin, Enver Sangineto, and Nicu Sebe. Whitening and Coloring batch
transform for GANs. In ICLR, 2019.
Baochen Sun, Jiashi Feng, and Kate Saenko. Return of frustratingly easy domain
adaptation. In AAAI, 2016.
Luke Tierney and Joseph B Kadane. Accurate approximations for posterior moments and
marginal densities. Journal of the American Statistical Association, 81(393):82–86,
1986.
Antonio Torralba and Alexei A Efros. Unbiased look at dataset bias. In CVPR, 2011.
Eric Tzeng, Judy Hoffman, Trevor Darrell, and Kate Saenko. Simultaneous deep transfer
across domains and tasks. In ICCV, 2015.
Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative
domain adaptation. In CVPR, 2017.
Dmitry Ulyanov, Andrea Vedaldi, and Victor S. Lempitsky. Instance normalization: The
missing ingredient for fast stylization. arXiv:1607.08022, 2016.
Riccardo Volpi, Pau De Jorge, Diane Larlus, and Gabriela Csurka. On the road to
online adaptation for semantic image segmentation. In Proceedings of the IEEE/CVF
Conference on Computer Vision and Pattern Recognition, pages 19184–19195, 2022.
Qin Wang, Olga Fink, Luc Van Gool, and Dengxin Dai. Continual test-time domain
adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition, pages 7201–7211, 2022.
Haifeng Xia, Handong Zhao, and Zhengming Ding. Adaptive adversarial network for
source-free domain adaptation. In Proceedings of the IEEE International Conference
on Computer Vision (ICCV), pages 9010–9019, 2021.
Ruijia Xu, Ziliang Chen, Wangmeng Zuo, Junjie Yan, and Liang Lin. Deep cocktail
network: Multi-source unsupervised domain adaptation with category shift. In CVPR,
2018.
Ruijia Xu, Guanbin Li, Jihan Yang, and Liang Lin. Larger norm more transferable: An
adaptive feature norm approach for unsupervised domain adaptation. In Proceedings
of the IEEE International Conference on Computer Vision (ICCV), pages 1426–1435,
2019.
Shiqi Yang, Yaxing Wang, Joost van de Weijer, Luis Herranz, and Shangling Jui.
Generalized source-free domain adaptation. In Proceedings of the IEEE International
Conference on Computer Vision (ICCV), pages 8978–8987, 2021.
Xu Yang, Cheng Deng, Tongliang Liu, and Dacheng Tao. Heterogeneous graph attention
network for unsupervised multiple-target domain adaptation. TPAMI, 2020.
Chun-Han Yao, Boqing Gong, Hang Qi, Yin Cui, Yukun Zhu, and Ming-Hsuan Yang.
Federated multi-target domain adaptation. In Proceedings of the IEEE/CVF Winter
Conference on Applications of Computer Vision, pages 1424–1433, 2022.
Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image
translation using cycle-consistent adversarial networks. In CVPR, 2017.