Ikko Yamane
ENSAI/CREST
Verified email at ensai.fr
Title · Cited by · Year
Do we need zero training loss after achieving zero training error?
T Ishida, I Yamane, T Sakai, G Niu, M Sugiyama
arXiv preprint arXiv:2002.08709, 2020
Cited by 129 · 2020
A one-step approach to covariate shift adaptation
T Zhang, I Yamane, N Lu, M Sugiyama
Asian Conference on Machine Learning, 65-80, 2020
Cited by 18 · 2020
Uplift modeling from separate labels
I Yamane, F Yger, J Atif, M Sugiyama
Advances in Neural Information Processing Systems 31, 2018
Cited by 18 · 2018
Multitask principal component analysis
I Yamane, F Yger, M Berar, M Sugiyama
Asian Conference on Machine Learning, 302-317, 2016
Cited by 10 · 2016
Is the performance of my deep network too good to be true? A direct approach to estimating the Bayes error in binary classification
T Ishida, I Yamane, N Charoenphakdee, G Niu, M Sugiyama
arXiv preprint arXiv:2202.00395, 2022
Cited by 6 · 2022
Regularized multitask learning for multidimensional log-density gradient estimation
I Yamane, H Sasaki, M Sugiyama
Neural Computation 28 (7), 1388-1410, 2016
Cited by 6 · 2016
Regularized multi-task learning for multi-dimensional log-density gradient estimation
I Yamane, H Sasaki, M Sugiyama
arXiv preprint arXiv:1508.00085, 2015
Cited by 2 · 2015
Mediated Uncoupled Learning and Validation with Bregman Divergences: Loss Family with Maximal Generality
I Yamane, Y Chevaleyre, T Ishida, F Yger
International Conference on Artificial Intelligence and Statistics, 4768-4801, 2023
Cited by 1 · 2023
Mediated uncoupled learning: Learning functions without direct input-output correspondences
I Yamane, J Honda, F Yger, M Sugiyama
International Conference on Machine Learning, 11637-11647, 2021
Cited by 1 · 2021
A Fourier-Analytic Approach to List-Decoding for Sparse Random Linear Codes
A Kawachi, I Yamane
IEICE TRANSACTIONS on Information and Systems 98 (3), 532-540, 2015
Cited by 1 · 2015
Scalable and hyper-parameter-free non-parametric covariate shift adaptation with conditional sampling
F Portier, L Truquet, I Yamane
arXiv preprint arXiv:2312.09969, 2023
2023
Skew-symmetrically perturbed gradient flow for convex optimization
F Futami, T Iwata, N Ueda, I Yamane
Asian Conference on Machine Learning, 721-736, 2021
2021
Regularized multi-task learning for multi-dimensional log-density gradient estimation
I Yamane, H Sasaki, M Sugiyama
IEICE Technical Report; IEICE Tech. Rep. 114 (306), 177-183, 2014
2014