
Scoring and boost

Hi,

I'm new to ElasticSearch, and I have been playing with boosting. Here is my test: https://gist.github.com/1445264.

I am a bit confused by the results. The boosting effects are what I expect for a single-word search, but the two-word searches give scores much closer together than I would expect. In particular, the phrase search gives the same score in all cases, which I don't understand (I expected the same ordering as in case 1: keyword, title, body).

Should I increase the boosts significantly, or should I change my mapping and/or queries to achieve this result?

Thanks for any help.

Re: Scoring and boost

I believe this is a Lucene thing more than anything. Try this URL:

This is the core of it:

score(q,d) = coord(q,d) · queryNorm(q) · Σ_{t in q} ( tf(t in d) · idf(t)² · t.getBoost() · norm(t,d) )

(the Lucene Practical Scoring Function)

where

1.  tf(t in d) correlates to the term's frequency, defined as the number of times term t appears in the currently scored document d. Documents that have more occurrences of a given term receive a higher score. Note that tf(t in q) is assumed to be 1 and therefore does not appear in this equation; however, if a query contains the same term twice, there will be two term queries with that same term, and the computation would still be correct (although not very efficient). The default computation for tf(t in d) in `DefaultSimilarity` is:

`tf(t in d)` = √frequency

2.  idf(t) stands for Inverse Document Frequency. This value correlates to the inverse of docFreq (the number of documents in which the term t appears). This means rarer terms give a higher contribution to the total score. idf(t) appears for t in both the query and the document, hence it is squared in the equation. The default computation for idf(t) in `DefaultSimilarity` is:

`idf(t)` = 1 + log( numDocs / (docFreq + 1) )
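The two defaults above are easy to check numerically. Here is a minimal Python sketch (the function names mirror the formulas; this is an illustration, not Lucene's implementation):

```python
import math

# DefaultSimilarity's default tf and idf, re-expressed in Python for
# illustration. All corpus numbers below are made up.

def tf(frequency):
    """tf(t in d) = sqrt(frequency): more occurrences help, with diminishing returns."""
    return math.sqrt(frequency)

def idf(doc_freq, num_docs):
    """idf(t) = 1 + log(numDocs / (docFreq + 1)): rarer terms contribute more."""
    return 1.0 + math.log(num_docs / (doc_freq + 1))

# A term found in 1 of 1000 documents outweighs one found in 500 of them:
rare, common = idf(1, 1000), idf(500, 1000)
```

Note the square root in tf: quadrupling a term's frequency only doubles its contribution, which is why stuffing a field with repeats has limited effect.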

3.  coord(q,d) is a score factor based on how many of the query terms are found in the specified document. Typically, a document that contains more of the query's terms will receive a higher score than another document with fewer query terms. This is a search time factor computed in `coord(q,d)` by the Similarity in effect at search time.

4. queryNorm(q) is a normalizing factor used to make scores between queries comparable. This factor does not affect document ranking (since all ranked documents are multiplied by the same factor), but rather just attempts to make scores from different queries (or even different indexes) comparable. This is a search time factor computed by the Similarity in effect at search time. The default computation in `DefaultSimilarity` produces a Euclidean norm:

queryNorm(q) = `queryNorm(sumOfSquaredWeights)` = 1 / √sumOfSquaredWeights

The sum of squared weights (of the query terms) is computed by the query `Weight` object. For example, a `boolean query` computes this value as:

`sumOfSquaredWeights` = `q.getBoost()`² · Σ_{t in q} ( idf(t) · `t.getBoost()` )²
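As a sketch, the two formulas above translate directly (query terms are given here as made-up (idf, boost) pairs; this only illustrates the arithmetic, not the `Weight` API):

```python
import math

# queryNorm for a boolean query, following the formulas above.
# query_terms: list of (idf_value, term_boost) pairs.

def sum_of_squared_weights(query_terms, q_boost=1.0):
    """sumOfSquaredWeights = q.getBoost()^2 * sum over t of (idf(t) * t.getBoost())^2"""
    return q_boost ** 2 * sum((idf_t * boost) ** 2 for idf_t, boost in query_terms)

def query_norm(query_terms, q_boost=1.0):
    """queryNorm(q) = 1 / sqrt(sumOfSquaredWeights)"""
    return 1.0 / math.sqrt(sum_of_squared_weights(query_terms, q_boost))
```

Notice that q_boost scales every term's weight equally and is then divided back out, which is why it does not change the ranking within one query.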

5.  t.getBoost() is a search time boost of term t in the query q, as specified in the query text (see query syntax) or as set by application calls to `setBoost()`. Note that there is no direct API for accessing the boost of a single term in a multi-term query; instead, multiple terms are represented in a query as multiple `TermQuery` objects, so the boost of a term is accessible by calling that sub-query's `getBoost()`.

6.  norm(t,d) encapsulates a few (indexing time) boost and length factors:
• Document boost - set by calling `doc.setBoost()` before adding the document to the index.
• Field boost - set by calling `field.setBoost()` before adding the field to a document.
• `lengthNorm(field)` - computed when the document is added to the index, in accordance with the number of tokens of this field in the document, so that shorter fields contribute more to the score. lengthNorm is computed by the Similarity class in effect at indexing.

When a document is added to the index, all the above factors are multiplied. If the document has multiple fields with the same name, all their boosts are multiplied together:

norm(t,d) = `doc.getBoost()` · `lengthNorm(field)` · Π_{field f in d named as t} `f.getBoost()`
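Putting factors 1-6 together, here is a toy end-to-end sketch of the whole formula for a single-field index. The corpus, frequencies, and the 1/√(length) lengthNorm are all illustrative; this mirrors the formula's structure, not Lucene's actual code.

```python
import math

# Toy version of the practical scoring function for one field.
# doc_terms is the tokenized field; doc_freqs and num_docs describe a
# made-up corpus. Term boosts are taken as 1 for simplicity.

def score(query_terms, doc_terms, num_docs, doc_freqs,
          q_boost=1.0, doc_boost=1.0, field_boost=1.0):
    matched = [t for t in query_terms if t in doc_terms]
    coord = len(matched) / len(query_terms)                 # factor 3

    def idf(t):                                             # factor 2
        return 1.0 + math.log(num_docs / (doc_freqs[t] + 1))

    # factor 6: norm(t,d) = doc boost * lengthNorm(field) * field boost
    norm = doc_boost * (1.0 / math.sqrt(len(doc_terms))) * field_boost

    # factor 4: queryNorm over all query terms
    sum_sq = q_boost ** 2 * sum(idf(t) ** 2 for t in query_terms)
    query_norm = 1.0 / math.sqrt(sum_sq)

    return coord * query_norm * sum(
        math.sqrt(doc_terms.count(t))                       # factor 1: tf
        * idf(t) ** 2 * norm                                # factors 2 and 6
        for t in matched)
```

With this sketch you can see two behaviors relevant to the original question: raising field_boost scales the score linearly, while padding the document with non-matching words lowers it through lengthNorm.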

However, the resulting norm value is `encoded` as a single byte before being stored. At search time, the norm byte value is read from the index `directory` and `decoded` back to a float norm value. This encoding/decoding, while reducing index size, comes at the price of precision loss - it is not guaranteed that decode(encode(x)) = x. For instance, decode(encode(0.89)) = 0.75.

Compression of norm values to a single byte saves memory at search time, because once a field is referenced at search time, its norms - for all documents - are maintained in memory.

The rationale supporting such lossy compression of norm values is that, given the difficulty (and inaccuracy) with which users express their true information need in a query, only big differences matter.

Last, note that search time is too late to modify this norm part of scoring, e.g. by using a different `Similarity` for search.


Re: Scoring and boost

Would you please help with my situation?
I have documents with a 'sentence' field of type String. How can I boost the score of documents with shorter 'sentence' values?

On Thursday, December 8, 2011 4:01:57 PM UTC+7, Ali Loghmani wrote:

[quoted text omitted; see the reply above]


Re: Scoring and boost

Stefan,

By the characteristics of the TF-IDF scoring formula, shorter sentences are already boosted: Lucene (and therefore ElasticSearch) uses length norms, and they are enabled by default.
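Concretely, with the default Similarity the length norm is 1/√(number of tokens in the field), so a shorter 'sentence' field already gets a larger per-match factor. A sketch (token counts here come from naive whitespace splitting, not your actual analyzer):

```python
import math

# DefaultSimilarity's lengthNorm, sketched: shorter fields => larger norm,
# hence higher scores for the same match.

def length_norm(num_tokens):
    return 1.0 / math.sqrt(num_tokens)

short = length_norm(len("a quick fox".split()))                    # 3 tokens
longer = length_norm(len("a quick brown fox jumps over".split()))  # 6 tokens
```

Keep in mind the norm is byte-encoded in the index, so sentences of very similar length can end up with the same stored norm and therefore tie.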

Here is a good explanation of norms and term frequencies in Lucene:

Cheers,

Ivan

On Sat, Jul 14, 2012 at 6:18 AM, Stefan Nguyen wrote:

[quoted text omitted; see the messages above]