Graduation Thesis: Foreign Literature Translation (Chinese-English Parallel Text) - Rapid and brief communication: Active learning for image retrieval with Co-SVM

Uploaded by: 仙*** | Document ID: 29873488 | Upload date: 2021-10-08 | Format: DOC | Pages: 14 | Size: 191.50 KB

English Translation
Title: Rapid and brief communication: Active learning for image retrieval with Co-SVM
Major and class:
Student ID:
Name:
Supervisor:
School:

Rapid and brief communication: Active learning for image retrieval with Co-SVM

Abstract
In relevance feedback algorithms, selective sampling is often used to reduce the cost of labeling and to explore the unlabeled data. In this paper, to improve the performance of selective sampling in image retrieval, we propose an active learning algorithm called Co-SVM. In the Co-SVM algorithm, color and texture are naturally treated as two sufficient and uncorrelated views of an image. We learn SVM classifiers separately in the color and texture feature subspaces, and the two classifiers are then used to classify the unlabeled data. Unlabeled samples on which the two classifiers disagree are selected for labeling. Experimental results show that the proposed algorithm is beneficial to image retrieval.

1. Introduction
Relevance feedback is an important approach to improving the performance of image retrieval [1]. For large-scale image database retrieval problems, labeled images are always rare compared with unlabeled images. How to exploit large amounts of unlabeled images to augment the performance of learning algorithms when only a small set of labeled images is available has become a hot topic. Tong and Chang proposed an active learning algorithm called SVMActive [2]. They argue that the samples lying beside the boundary are the most informative; therefore, in each round of relevance feedback, the images closest to the support vector boundary are returned to the user for labeling. Usually, the feature representation of an image is a combination of diverse features, such as color, texture, and shape. For a given image, the contribution of different features differs significantly; on the other hand, the importance of the same feature also differs across images. For example, color is usually more prominent than shape for a landscape image. However, the retrieval result is the averaged effect of all features, which ignores the distinct properties of individual features. Some studies have shown that multi-view learning performs much better than single-view learning [3,4]. In this paper, we regard color and texture as two sufficient and uncorrelated feature representations of an image. Inspired by SVMActive, we propose a novel active learning method called Co-SVM. First, SVM classifiers are learned separately on the different feature representations; then these classifiers cooperatively select the most informative samples from the unlabeled data; finally, the informative samples are returned to the user for labeling.

2. Support vector machines
As an effective binary classifier, SVM is particularly suitable for the classification task in relevance feedback of image retrieval [5]. From the labeled images, SVM learns a boundary (i.e., a hyperplane) that separates the relevant images from the irrelevant ones with maximum margin. Images on one side of the boundary are considered relevant, and those on the other side are considered irrelevant. Given a set of labeled images (x1, y1), ..., (xn, yn), where xi is the feature representation of an image and yi ∈ {−1, +1} is the class label (−1 denotes negative and +1 denotes positive), training the SVM classifier leads to the following quadratic optimization problem:

max_α  Σ_{i=1}^{n} α_i − (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} α_i α_j y_i y_j k(x_i, x_j)
s.t.   0 ≤ α_i ≤ C, i = 1, ..., n,  and  Σ_{i=1}^{n} α_i y_i = 0,

where C is a constant and k is the kernel function. The boundary (hyperplane) is

Σ_{i=1}^{n} α_i y_i k(x_i, x) + b = 0,

where b satisfies, for any support vector x_s: y_s (Σ_{i=1}^{n} α_i y_i k(x_i, x_s) + b) = 1. The classification function can be written as

h(x) = sign(Σ_{i=1}^{n} α_i y_i k(x_i, x) + b).

3. Co-SVM
3.1. Two-view scheme
It is natural and reasonable to assume that the color features and texture features of an image are two sufficient and uncorrelated views. Suppose x = (c1, ..., ci, t1, ..., tj) is the feature representation of an image, where c1, ..., ci and t1, ..., tj are the color attributes and texture attributes, respectively. For simplicity, we define the feature representation space V = VC × VT, with (c1, ..., ci) ∈ VC and (t1, ..., tj) ∈ VT. To find as many relevant images as possible, as in general relevance feedback methods, SVM is first used to learn a classifier h on the labeled samples in the combined view V. The unlabeled set is divided into positive and negative by h, and then m positive images are returned to the user for labeling. At the second stage, SVM is used to learn two classifiers hC and hT separately on the labeled samples, using only the color view VC and the texture view VT, respectively. Unlabeled samples on which the two classifiers disagree are recommended to the user for labeling; we call them contention samples. That is, a contention sample is classified as positive by hC (CP) but negative by hT (TN), or as negative by hC (CN) but positive by hT (TP). For each classifier, the distance between a sample and the hyperplane (boundary) can be regarded as a confidence degree: the larger the distance, the higher the confidence. To ensure that users label the most informative samples, the samples close to the hyperplane in both views are recommended to the user for labeling.

3.2. Multi-view scheme
The proposed two-view algorithm is easily extended to a multi-view scheme. Suppose the feature representation of a color image is defined as V = V1 × V2 × ... × Vk, k ≥ 2, where each Vi, i = 1, ..., k, corresponds to a different view of the color image. Then k SVM classifiers hi can be learned, one on each view. All unlabeled data are classified as positive (+1) or negative (−1) by each of the k SVM classifiers. Define the confidence degree D(x) = |Σ_{i=1}^{k} sign(hi(x))|. The confidence degree reflects the consistency of all classifiers on a given example: the higher the confidence degree, the more consistent the classifiers. Conversely, a low confidence degree indicates that the classification is uncertain. Labeling these uncertain samples yields the largest improvement in performance; therefore, the unlabeled samples with the lowest confidence degrees are taken as the contention samples.

3.3. About SVM
The SVM (support vector machine) method [4] is built on the VC-dimension theory of statistical learning theory and the structural risk minimization principle. Given limited sample information, it seeks the best trade-off between model complexity and learning ability in order to obtain the best generalization ability. The main idea of SVM is to construct a hyperplane as the decision surface such that the margin of separation between positive and negative examples is maximized. In the two-dimensional linearly separable case, let H be a line that separates the two classes of training samples without error, and let H1 and H2 be the lines parallel to H that pass through the samples of each class closest to H; the distance between them is called the classification margin. The optimal separating line is the one that not only separates the two classes correctly but also maximizes the margin. In a high-dimensional space, the optimal separating line becomes the optimal separating hyperplane.

4. Experiments
To validate the effectiveness of the proposed algorithm, we compare it with Tong and Chang's SVMActive and with the traditional relevance feedback algorithm using SVM. Experiments are performed on a subset selected from the Corel image CDs. There are 50 categories in our subset; each category contains 100 images, 5000 images in all. The categories have different semantic meanings, such as animal, building, and landscape. The main purpose of our experiments is to verify whether the learning mechanism of Co-SVM is useful, so we employ only simple color and texture features to represent images. The color features include a 125-dimensional color histogram vector and a 6-dimensional color moment vector in RGB. The texture features are extracted using a 3-level discrete wavelet transform (DWT); the mean and variance of each of the 10 subbands are arranged into a 20-dimensional texture feature vector. An RBF kernel is adopted in the SVM classifiers, and the kernel width is learned by cross-validation. The first 10 images of each category, 500 images in total, are selected as query images to probe the retrieval performance. In each round, only the top 10 images are labeled, together with the 10 least confident images selected from the contention set. All accuracies reported below are averaged over all test images. Figs. 2 and 3 show the accuracy-versus-scope curves of the three algorithms after the third and fifth rounds of relevance feedback, respectively. From the comparison we can see that the proposed algorithm (Co-SVM) outperforms SVMActive and the traditional relevance feedback method (SVM). Furthermore, we investigate the accuracy of the algorithms within the top 10 to top 100 results after five rounds of feedback; for limited space, we show only the top-30 and top-50 results in Figs. 1 and 2, respectively.

Fig. 1. Average image retrieval accuracy within the top 30. Fig. 2. Average image retrieval accuracy within the top 50.

5. Related works
Co-training [3] and co-testing [4] are two representative multi-view learning algorithms. The co-training algorithm adopts a cooperative learning strategy and requires that the two views of the data be compatible and redundant. We attempted to augment the performance of the color and texture classifiers by combining them with co-training, but the results were worse. Considering the conditions of co-training, it is natural to find that the color attributes and texture attributes of a color image are uncorrelated but not compatible. In contrast, co-testing requires the views to be sufficient and uncorrelated, which makes the classifiers more independent for classification. Tong and Chang first introduced active learning into relevance feedback for image retrieval with SVMActive [2]. They argue that the samples lying beside the boundary can reduce the version space as fast as possible, i.e., eliminate hypotheses quickly; therefore, in each round of relevance feedback, the images closest to the hyperplane are returned to the user for labeling. SVMActive is optimal for minimizing the version space in the single-view case. The proposed algorithm can be regarded as an extension of SVMActive to the multi-view case.

6. Conclusions
In this paper, we propose Co-SVM, an active learning algorithm for selective sampling in relevance feedback. To improve performance, relevance feedback is divided into two stages. At the first stage, we rank the unlabeled images by their similarity to the query and let users label the top images, as in common relevance feedback algorithms. At the second stage, to reduce the labeling requirement, only a set of the most informative samples is selected by Co-SVM for labeling. Experimental results show that Co-SVM achieves an obvious improvement over SVMActive and over the traditional relevance feedback algorithm without active learning.

Acknowledgements
The first author was supported by a Nokia postdoctoral fellowship.

References
[1] Y. Rui, T.S. Huang, S.F. Chang, Image retrieval: current techniques, promising directions and open issues, J. Visual Commun. Image Representation 10 (1999) 39-62.
[2] S. Tong, E. Chang, Support vector machine active learning for image retrieval, in: Proceedings of the Ninth ACM International Conference on Multimedia, 2001, pp. 107-118.
[3] A. Blum, T. Mitchell, Combining labeled and unlabeled data with co-training, in: Proceedings of the 11th Annual Conference on Computational Learning Theory, 1998, pp. 92-100.
[4] I. Muslea, S. Minton, C.A. Knoblock, Selective sampling with redundant views, in: Proceedings of the 1

7th National Conference on Artificial Intelligence, 2000, pp. 621-626.
[5] V. Vapnik, Statistical Learning Theory, Wiley, New York, 1998.

Original Text

Rapid and brief communication
Active learning for image retrieval with Co-SVM

Abstract
In relevance feedback algorithms, selective sampling is often used to reduce the cost of labeling and to explore the unlabeled data. In this paper, we propose an active learning algorithm, Co-SVM, to improve the performance of selective sampling in image retrieval. In the Co-SVM algorithm, color and texture are naturally considered as sufficient and uncorrelated views of an image. SVM classifiers are learned in the color and texture feature subspaces, respectively; the two classifiers are then used to classify the unlabeled data. Unlabeled samples that are classified differently by the two classifiers are chosen for labeling. The experimental results show that the proposed algorithm is beneficial to image retrieval.

1. Introduction
Relevance feedback is an important approach to improving the performance of image retrieval systems [1]. For the large-scale image database retrieval problem, labeled images are always rare compared with unlabeled images. How to utilize the large amounts of unlabeled images to augment the performance of learning algorithms when only a small set of labeled images is available has become a hot topic. Tong and Chang proposed an active learning paradigm named SVMActive [2]. They think that the samples lying beside the boundary are the most informative.
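This nearest-to-boundary selection is easy to sketch in code. The following is a minimal illustration, not the paper's implementation: it assumes scikit-learn's SVC, and the labeled images and unlabeled pool are random stand-ins for real image features.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy stand-ins: 10 relevant and 10 irrelevant labeled images,
# plus an unlabeled pool (feature values are made up).
X_labeled = np.vstack([rng.normal(+1.0, 1.0, (10, 8)),
                       rng.normal(-1.0, 1.0, (10, 8))])
y_labeled = np.array([1] * 10 + [-1] * 10)
X_pool = rng.normal(0.0, 1.0, (200, 8))

# Learn the maximum-margin boundary from the labeled feedback.
clf = SVC(kernel="rbf", C=1.0).fit(X_labeled, y_labeled)

# |decision_function| grows with the distance to the hyperplane;
# the smallest values mark the samples the classifier is least sure about.
margins = np.abs(clf.decision_function(X_pool))
query_idx = np.argsort(margins)[:10]  # ask the user to label these 10
```

In a real relevance feedback loop, the user's labels for `query_idx` would be appended to the labeled set and the classifier retrained each round.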

Therefore, in each round of relevance feedback, the images that are closest to the support vector boundary are returned to users for labeling. Usually, the feature representation of an image is a combination of diverse features, such as color, texture, and shape. For a specified example, the contribution of different features is significantly different. On the other hand, the importance of the same feature is also different for different samples; for example, color is often more prominent than shape for a landscape image. However, the retrieval results are the averaged effect of all features, which ignores the distinct properties of individual features. Some works have suggested that multi-view learning can do much better than single-view learning in eliminating the hypotheses consistent with the training set [3,4]. In this paper, we consider color and texture as two sufficient and uncorrelated feature representations of an image. Inspired by SVMActive, we propose a novel active learning method, Co-SVM. First, SVM classifiers are separately learned on the different feature representations; these classifiers are then used to cooperatively select the most informative samples from the unlabeled data; finally, the informative samples are returned to users for labeling.

2. Support vector machines
Being an effective binary classifier, the Support Vector Machine (SVM) is particularly fit for the classification task in relevance feedback of image retrieval [5]. With the labeled images, SVM learns a boundary (i.e., a hyperplane) separating the relevant images from the irrelevant images with maximum margin. The images on one side of the boundary are considered relevant, and those on the other side are regarded as irrelevant. Given a set of labeled images (x1, y1), ..., (xn, yn), where xi is the feature representation of one image and yi ∈ {−1, +1} is the class label (−1 denotes negative and +1 denotes positive), training the SVM classifier leads to the following quadratic optimization problem:

max_α  Σ_{i=1}^{n} α_i − (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} α_i α_j y_i y_j k(x_i, x_j)
s.t.   0 ≤ α_i ≤ C, i = 1, ..., n,  and  Σ_{i=1}^{n} α_i y_i = 0,

where C is a constant and k is the kernel function. The boundary (hyperplane) is

Σ_{i=1}^{n} α_i y_i k(x_i, x) + b = 0,

where b satisfies, for any support vector x_s: y_s (Σ_{i=1}^{n} α_i y_i k(x_i, x_s) + b) = 1. The classification function can be written as

h(x) = sign(Σ_{i=1}^{n} α_i y_i k(x_i, x) + b).

3. Co-SVM
3.1. Two-view scheme
It is natural and reasonable to assume that color features and texture features are two sufficient and uncorrelated views of an image. Assume that x = (c1, ..., ci, t1, ..., tj) is the feature representation of an image, where c1, ..., ci and t1, ..., tj are the color attributes and texture attributes, respectively. For simplicity, we define the feature representation space V = VC × VT, with (c1, ..., ci) ∈ VC and (t1, ..., tj) ∈ VT. In order to find as many relevant images as possible, as in general relevance feedback methods, SVM is used at the first stage to learn a classifier h on the labeled samples in the combined view V. The unlabeled set is classified into positive and negative by h, and then m positive images are returned to the user to label. At the second stage, SVM is used to learn two classifiers hC and hT separately on the labeled samples, using only the color view VC and the texture view VT, respectively. A set of unlabeled samples on which the two classifiers disagree, named contention samples, is recommended to the user to label. That is, the contention samples are classified as positive by hC (CP) while negative by hT (TN), or as negative by hC (CN) while positive by hT (TP). For each classifier, the distance between a sample and the hyperplane (boundary) can be regarded as a confidence degree: the larger the distance, the higher the confidence degree. To ensure that users label the most informative samples, the samples that are close to the hyperplane in both views are recommended to the user to label.

3.2. Multi-view scheme
The proposed algorithm in the two-view case is easily extended to a multi-view scheme. Assume that the feature representation of a color image is defined as V = V1 × V2 × ... × Vk, k ≥ 2, where each Vi, i = 1, ..., k, corresponds to a different view of the color image. Then k SVM classifiers hi can be individually learned, one on each view. All unlabeled data are classified as positive (+1) or negative (−1) by the k SVM classifiers, respectively. Define the confidence degree

D(x) = |Σ_{i=1}^{k} sign(hi(x))|.

The confidence degree reflects the consistency of all classifiers on a specified example: the higher the confidence degree, the more consistent the classification. Inversely, a lower degree indicates that the classification is uncertain. Labeling these uncertain samples will result in the maximum improvement of performance. Therefore, the unlabeled samples whose confidence degrees are the lowest are considered as the contention samples.

3.3. About SVM
The SVM (Support Vector Machine) method [4] is based on the VC-dimension theory of statistical learning theory and the structural risk minimization principle: according to the limited sample information, it seeks the best compromise between model complexity and learning ability in order to obtain the best generalization ability.
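The multi-view confidence degree D(x) of Section 3.2 is just a vote count over the per-view classifiers. A small self-contained sketch follows; the three decision functions are hypothetical, made up purely for illustration.

```python
import numpy as np

def confidence_degree(classifiers, x):
    # D(x) = |sum_i sign(h_i(x))|: maximal when all views agree on x,
    # low when they disagree (i.e., x is an uncertain/contention sample).
    return abs(sum(np.sign(h(x)) for h in classifiers))

# Hypothetical per-view decision functions on a 2-D feature vector.
h_color = lambda x: x[0] - 0.5
h_texture = lambda x: 0.2 - x[1]
h_shape = lambda x: x[0] + x[1]

x_agree = np.array([0.9, 0.1])  # all three views vote +1, so D = 3
x_split = np.array([0.9, 0.8])  # texture votes -1, the others +1, so D = 1
print(confidence_degree([h_color, h_texture, h_shape], x_agree))
print(confidence_degree([h_color, h_texture, h_shape], x_split))
```

The pool samples with the lowest D(x) would then be the ones presented to the user as contention samples.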

The main idea of SVM is to construct a hyperplane as the decision surface such that the margin of separation between positive examples and counterexamples is maximized. For the two-dimensional linearly separable case, let H be a classification line that separates the two classes of training samples without error, and let H1 and H2 be the lines parallel to H that pass through the samples of each class closest to H; the distance between them is called the classification margin. The so-called optimal separating line is required not only to separate the two classes correctly but also to maximize the classification margin. In a high-dimensional space, the optimal separating line becomes the optimal separating hyperplane.

4. Experiments
To validate the effectiveness of the proposed algorithm in improving performance, we compare it with Tong and Chang's SVMActive and with the traditional relevance feedback algorithm using SVM. Experiments are performed on a subset selected from the Corel image CDs. There are 50 categories in our subset; each category contains 100 images, 5000 images in all. The categories have different semantic meanings, such as animal, building, and landscape. The main purpose of our experiments is to verify whether the learning mechanisms of Co-SVM are useful, so we employed only simple color and texture features to represent images. The color features include a 125-dimensional color histogram vector and a 6-dimensional color moment vector in RGB. The texture features are extracted using a 3-level discrete wavelet transform (DWT); the mean and variance averaged on each of the 10 subbands are arranged into a 20-dimensional texture feature vector. An RBF kernel is adopted in the SVM classifiers, and the kernel width is learned by a cross-validation approach. The first 10 images of each category, 500 images in total, are selected as query images to probe the retrieval performance. In each round, only the top 10 images are labeled, together with the 10 least confident images selected from the contention set. All accuracies in the following text are the averaged accuracy over all test images. Figs. 2 and 3 show the accuracy vs. scope curves of the three algorithms after the third and fifth rounds of relevance feedback, respectively. From the comparison results we can see that the proposed algorithm (Co-SVM) is better than SVMActive and the traditional relevance feedback method (SVM). Furthermore, we investigate the accuracy of the various algorithms within top 10 to top 100 with five rounds of feedback. For limited space, we picture only the results for top 30 and top 50 in Figs. 1 and 5, respectively. The detailed results are summarized in Table 1, which shows that Co-SVM achieves the highest performance.

5. Related works
Co-training [3] and co-testing [4] are two representative multi-view learning algorithms. The co-training algorithm adopts a cooperative learning strategy and requires that the two views of the data be compatible and redundant. We have attempted to augment the performance of both the color and texture classifiers by combining co-training, but the results were worse. Considering the conditions of co-training, it is not surprising to find that the color attributes and texture attributes of a color image are uncorrelated but not compatible. In contrast, co-testing requires that the views be sufficient and uncorrelated, which makes the classifiers more independent for classification. Tong and Chang first introduced the active learning approach to relevance feedback of image retrieval with SVMActive [2].
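Where SVMActive queries the points nearest a single hyperplane, Co-SVM queries the points on which the two view-specific classifiers disagree. The contrast can be sketched as follows; this assumes scikit-learn's SVC, and the color/texture split of the feature vector is a made-up stand-in for real image features.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Toy labeled set: the first 5 dims stand in for the color view,
# the last 5 for the texture view (both invented for illustration).
X = np.vstack([rng.normal(+1.0, 1.0, (15, 10)),
               rng.normal(-1.0, 1.0, (15, 10))])
y = np.array([1] * 15 + [-1] * 15)
X_pool = rng.normal(0.0, 1.0, (300, 10))
color, texture = slice(0, 5), slice(5, 10)

h_c = SVC(kernel="rbf").fit(X[:, color], y)    # color-view classifier
h_t = SVC(kernel="rbf").fit(X[:, texture], y)  # texture-view classifier

# Contention samples: pool points the two views classify differently.
pred_c = h_c.predict(X_pool[:, color])
pred_t = h_t.predict(X_pool[:, texture])
contention = np.where(pred_c != pred_t)[0]

# Among them, prefer samples close to the hyperplane in both views.
dist_c = np.abs(h_c.decision_function(X_pool[:, color]))
dist_t = np.abs(h_t.decision_function(X_pool[:, texture]))
query_idx = contention[np.argsort(dist_c[contention] + dist_t[contention])][:10]
```

The final line combines the two selection criteria of Section 3.1: disagreement between views, then smallest distance to both hyperplanes.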

They think that the samples lying beside the boundary can reduce the version space as fast as possible, i.e., eliminate hypotheses quickly. Therefore, in each round of relevance feedback, the images that are closest to the hyperplane are returned to users for labeling. SVMActive is optimal for minimizing the version space in the single-view case. The proposed algorithm can be regarded as an extension of SVMActive to the multiple-view case.

6. Conclusions
In this paper, we proposed Co-SVM, a novel active learning algorithm for selective sampling in relevance feedback. In order to improve the performance, the relevance feedback is divided into two stages. At the first stage, we rank the unlabeled images by their similarity to the query and let users label the top images, as in common relevance feedback algorithms. In order to reduce the labeling requirement, only a set of the most informative samples is selected by Co-SVM to label at the second stage. The experimental results show that Co-SVM achieves an obvious improvement compared with SVMActive and with the traditional relevance feedback algorithm without active learning.

Acknowledgements
The first author was supported under a Nokia Postdoctoral Fellowship.

References
[1] Y. Rui, T.S. Huang, S.F. Chang, Image retrieval: current techniques, promising directions and open issues, J. Visual Commun. Image Representation 10 (1999) 39-62.
[2] S. Tong, E. Chang, Support vector machine active learning for image retrieval, in: Proceedings of the Ninth ACM International Conference on Multimedia, 2001, pp. 107-118.
[3] A. Blum, T. Mitchell, Combining labeled and unlabeled data with co-training, in: Proceedings of the 11th Annual Conference on Computational Learning Theory, 1998, pp. 92-100.
[4] I. Muslea, S. Minton, C.A. Knoblock, Selective sampling with redundant views, in: Proceedings of the 17th National Conference on Artificial Intelligence, 2000, pp. 621-626.
[5] V. Vapnik, Statistical Learning Theory, Wiley, New York, 1998.
