Yazar "Wang, Kangning" seçeneğine göre listele

Now showing 1 - 2 of 2
  • Convolution smoothing and non-convex regularization for support vector machine in high dimensions
    (Elsevier, 2024) Wang, Kangning; Yang, Junning; Polat, Kemal; Alhudhaif, Adi; Sun, Xiaofei
    The support vector machine (SVM) is a well-known statistical learning tool for binary classification. One serious drawback of SVM is that it can be adversely affected by redundant variables, and research has shown that variable selection is crucial and necessary for achieving good classification accuracy. Hence, a number of SVM variable selection studies have been devoted to this problem, and they share a unified formulation of empirical hinge loss plus a sparse penalty. However, a noteworthy issue is that the computational complexity of existing methods is high, especially for large-scale problems, due to the non-smoothness of the hinge loss. To solve this issue, we first propose a convolution smoothing approach, which turns the non-smooth hinge loss into a smooth surrogate that is asymptotically equivalent to it. Moreover, we construct a computationally more efficient SVM variable selection procedure by implementing a non-convex penalized convolution-smoothed hinge loss. In theory, we prove that the resulting variable selection procedure possesses the oracle property when the number of predictors is diverging. Numerical experiments also confirm the good performance of the new method. (A minimal sketch of the smoothing idea follows this listing.)
  • Smooth quantile regression and distributed inference for non-randomly stored big data
    (Pergamon-Elsevier Science Ltd, 2023) Wang, Kangning; Jia, Jiaojiao; Polat, Kemal; Sun, Xiaofei; Alhudhaif, Adi; Alenezi, Fayadh
    In recent years, many distributed algorithms for big-data quantile regression have been proposed. However, they all rely on the assumption that the data are stored in a random manner. This seldom holds in practice, and violation of the assumption can seriously degrade their performance. Moreover, the non-smooth quantile loss brings inconvenience in both computation and theory. To solve these issues, we first propose a convex and smooth quantile loss, which converges to the quantile loss uniformly. Then a novel pilot-sample surrogate smooth quantile loss is constructed, which realizes communication-efficient distributed quantile regression and overcomes the non-randomly distributed nature of big data. In theory, the estimation consistency and asymptotic normality of the resulting distributed estimator are established. The theoretical results guarantee that the new method is adaptive to data stored in any arbitrary way, and can work as well as if all the data were pooled on a single machine. Numerical experiments on both synthetic and real data verify the good performance of the new method. (A minimal sketch of the smoothed quantile loss also follows this listing.)
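
Both abstracts rest on the same device: convolving a non-smooth loss with a kernel to obtain a smooth surrogate. As a minimal sketch of the first paper's idea, the snippet below smooths the hinge loss assuming a Gaussian kernel; the kernel choice and the resulting closed form are illustrative assumptions, not details taken from the paper.

```python
# Sketch: convolution smoothing of the hinge loss (Gaussian kernel assumed).
import numpy as np
from scipy.stats import norm

def hinge(u):
    """Non-smooth hinge loss max(0, 1 - u), where u = y * f(x)."""
    return np.maximum(0.0, 1.0 - u)

def smoothed_hinge(u, h):
    """Gaussian-smoothed hinge loss with bandwidth h > 0.

    l_h(u) = E[max(0, 1 - u - h*Z)], Z ~ N(0, 1), which evaluates to
    (1 - u) * Phi((1 - u) / h) + h * phi((1 - u) / h): smooth everywhere,
    and uniformly within h * phi(0) of the hinge loss.
    """
    a = (1.0 - u) / h
    return (1.0 - u) * norm.cdf(a) + h * norm.pdf(a)

u = np.linspace(-2.0, 3.0, 501)
for h in (0.5, 0.1, 0.01):
    gap = np.max(np.abs(smoothed_hinge(u, h) - hinge(u)))
    print(f"h = {h}: max |smoothed - hinge| = {gap:.4f}")  # shrinks with h
```

Shrinking the bandwidth h recovers the hinge loss, which is the sense in which the surrogate is asymptotically equivalent; a smooth penalized objective can then be handled by standard gradient-based solvers.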
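Similarly, here is a minimal sketch of the second paper's smooth surrogate for the quantile check loss, again under an assumed Gaussian kernel; the paper's exact construction and its pilot-sample distributed scheme are not reproduced here.

```python
# Sketch: convolution smoothing of the quantile check loss (Gaussian kernel assumed).
import numpy as np
from scipy.stats import norm

def check_loss(u, tau):
    """Non-smooth quantile loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0.0))

def smoothed_check_loss(u, tau, h):
    """Gaussian-smoothed check loss with bandwidth h > 0.

    l_h(u) = E[rho_tau(u + h*Z)], Z ~ N(0, 1)
           = h * phi(u / h) + u * (tau - Phi(-u / h)).
    Convex, infinitely differentiable, and within h * phi(0) of rho_tau.
    """
    return h * norm.pdf(u / h) + u * (tau - norm.cdf(-u / h))

u, tau = np.linspace(-3.0, 3.0, 601), 0.7
for h in (0.5, 0.1, 0.01):
    gap = np.max(np.abs(smoothed_check_loss(u, tau, h) - check_loss(u, tau)))
    print(f"h = {h}: max |smoothed - check| = {gap:.4f}")  # shrinks with h
```

Because the surrogate is convex and smooth, each machine can contribute exact gradients, which is what makes communication-efficient distributed fitting tractable; the pilot-sample weighting that corrects for non-random storage is the paper's own contribution and is not sketched here.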


This site is protected by the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Bolu Abant İzzet Baysal Üniversitesi Library, Bolu, TÜRKİYE

DSpace 7.6.1, Powered by İdeal DSpace

DSpace software copyright © 2002-2025 LYRASIS
