Trac Ticket Queries

In addition to reports, Trac provides support for custom ticket queries, which can be used to display tickets that meet specified criteria.

To configure and execute a custom query, switch to the View Tickets module from the navigation bar, and select the Custom Query link.

Filters

When you first go to the query page, the default filter will display tickets relevant to you:

  • If logged in, it will display all open tickets assigned to you.
  • If not logged in but you have specified a name or email address in the preferences, then it will display all open tickets where your email (or name if email not defined) is in the CC list.
  • If not logged in and no name/email is defined in the preferences, then all open issues are displayed.

Current filters can be removed by clicking the button to the left with the minus sign on the label. New filters are added from the dropdown lists at the bottom corners of the filters box; 'And' conditions on the left, 'Or' conditions on the right. Filters with either a text box or a dropdown menu of options can be added multiple times to perform an Or on the criteria.
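For example, the two query strings below are equivalent ways of expressing such an Or: repeating a filter for the same field matches the same tickets as listing the values in one filter separated by pipes (both forms appear in the examples later on this page):

status=new&status=assigned&status=reopened
status=new|assigned|reopened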

You can use the fields just below the filters box to group the results based on a field, or display the full description for each ticket.

After you have edited your filters, click the Update button to refresh your results.

Clicking on one of the query results will take you to that ticket. You can navigate through the results by clicking the Next Ticket or Previous Ticket links just below the main menu bar, or click the Back to Query link to return to the query page.

You can safely edit any of the tickets and continue to navigate through the results using the Next/Previous/Back to Query links after saving your changes. When you return to the query, any tickets which were edited will be displayed with italicized text. If one of the tickets was edited such that it no longer matches the query criteria, the text will also be greyed. Lastly, if a new ticket matching the query criteria has been created, it will be shown in bold.

The query results can be refreshed and cleared of these status indicators by clicking the Update button again.

Saving Queries

Trac allows you to save the query as a named query accessible from the reports module. To save a query ensure that you have Updated the view and then click the Save query button displayed beneath the results. You can also save references to queries in Wiki content, as described below.

Note: one way to easily build queries like the ones below is to build and test them in the Custom report module and, when ready, click Save query. This will build the query string for you; all you need to do is remove the extra line breaks.

Note: you must have the REPORT_CREATE permission in order to save queries to the list of default reports. The Save query button will only appear if you are logged in as a user that has been granted this permission. If your account does not have permission to create reports, you can still use the methods below to save a query.
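As a sketch, an administrator could grant that permission from the command line with trac-admin (the environment path and username here are hypothetical):

trac-admin /path/to/projenv permission add john REPORT_CREATE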

You may want to save some queries so that you can come back to them later. You can do this by making a link to the query from any Wiki page.

[query:status=new|assigned|reopened&version=1.0 Active tickets against 1.0]

Which is displayed as:

Active tickets against 1.0

This uses a very simple query language to specify the criteria; see Query Language below.

Alternatively, you can copy the query string of a query and paste that into the Wiki link, including the leading ? character:

[query:?status=new&status=assigned&status=reopened&group=owner Assigned tickets by owner]

Which is displayed as:

Assigned tickets by owner

Customizing the table format

You can also customize the columns displayed in the table format (format=table) by using col=<field>. You can specify multiple fields and what order they are displayed in by placing pipes (|) between the columns:

[[TicketQuery(max=3,status=closed,order=id,desc=1,format=table,col=resolution|summary|owner|reporter)]]

This is displayed as:

Results (1 - 3 of 34)

Ticket  Resolution  Summary                                     Owner  Reporter
#34     fixed       Speech Quality Aspects in emergency calls          hoene@…
#33     fixed       Impact of transmission delay                       hoene@…
#32     invalid     Playout/Dejittering Buffer                         hoene@…

Full rows

In table format you can also have full rows by using rows=<field>:

[[TicketQuery(max=3,status=closed,order=id,desc=1,format=table,col=resolution|summary|owner|reporter,rows=description)]]

This is displayed as:

Results (1 - 3 of 34)

Ticket  Resolution  Summary                                     Owner  Reporter
#34     fixed       Speech Quality Aspects in emergency calls          hoene@…
Description

[Hoene]: At this point of time it is not clear to me what the service requirements of an emergency call are going to be. Which speech/audio quality requirements do the emergency agencies have?

Brian is suggesting a technical solution to become a requirement. But don't we have to listen to the users first? Only then can we develop technical solutions that might affect our work in the Codec WG.

The implications aren't a big deal for the CODEC WG, just that you need to be able to use SDP to signal no VAD.

Are you sure that this is the only requirement? I think that there are other important things in case of emergency calls. Do they need audio quality? Do they need ultra-low-delay or is any transmission delay fine? What shall happen, if the transmission quality is bad? Push-to-talk?

No one thinks an end user is going to go and change the configuration in their phone before making an emergency call.

No, he does not know about VAD and he does not care about it. Some part of the system must take care of it. However, the user does know that he wants to make an emergency call, and the phone might have a button called "emergency call".

[Brian]: Generally, high fidelity is a good thing for emergency calls. This has to be balanced against how many codecs each PSAP implements, but at least in the evolving North American standards, which are currently believed to be the most advanced, it is recommended that PSAPs implement one or two wideband codecs. Graceful fallback in cases of congestion would be nice, but not a hard requirement.

Delay below 150ms is unlikely to be of much use. Sometimes, an emergency call can have audio+video and the delays must match (lip sync). Frame time really doesn't matter much independent of its effect on delay. I personally think delay over 150ms is not acceptable, but we've had this discussion and there are some who persist in believing you can get good quality with 250 ms. In an emergency call, stress is a big factor, and social skills are not nuanced. This means turn taking is an issue, and anything that gets in the way of an interruption is bad. All of my testing indicates that turn taking, especially argument, is impaired above around 150 ms.

All of the above doesn't really rise to hard requirements other than the soft requirement that delay doesn't impair turn taking. The ability to disable VAD is a hard requirement.

I believe that is it with respect to codec. See ietf-ecrit-phonebcp for the actual requirements.

[Gregor Jänin]: I have to come back on a comment Christian made: what about push-to-talk? I have found out that in Europe and Australia they are looking into "eurocae wg67" as a solution to transfer PTT! It is already used in flight safety control, and some of the PSAP equipment vendors are doing both flight and public safety! What is your position on that? We are going to need it, especially when we talk about authority to authority, or even just Nextel. [Brian]: PTT is not currently a requirement for citizen to authority. It is a requirement for authority to authority. It is not a requirement for authority to citizen.

I don’t think PTT has any effect on codec.

...

We do support text (and video) with ecrit standards. Of particular interest is the “real time text” codec work, but just using SIP MESSAGE works. There is also some work on sending ‘data only’, i.e. sensor alerts using the same mechanisms.

It has occurred to me that while in the normal case, we do want to disable VAD, it does help when the network is highly congested. Since what I asked for was that it should be negotiable, the PSAP could control when it was or wasn’t used.

The problem with PTT in general is that the emergency service isn’t in a “talk group”, and what you would be establishing is an on the fly two party talk group which has to be routed using the same mechanisms as the emergency calls are routed. That’s not impossible, but it is unusual.

#33 fixed Impact of transmission delay hoene@…
Description

[Koen]: For typical VoIP applications, Moore's law has lessened the pressure to reduce bitrates, delay and complexity, and has shifted the focus to fidelity instead.

[Benjamin]: I think this is a typo, and you mean "lessened the pressure to reduce bitrates and complexity, and has shifted the focus to fidelity and delay instead".

[Koen]: Not a typo: codecs have become more wasteful with delay, while delivering better fidelity. G.718 evolved out of AMR-WB and has more than twice the delay. Same for G.729.1 versus G.729. This is not by accident.

The main rationale for codec delay being less important today is that faster hardware has reduced end-to-end delay in every step along the way. As a result, a typical VoIP connection now operates at a flatter part of the "impairment-vs-delay" curve, meaning that reducing delay by N ms at a given fidelity gives a smaller improvement to end users today than it did some years ago. Therefore, the weight on minimizing delay in the "codec design problem" has gone down, and the optimum codec operating point has naturally shifted towards higher delay, in favor of fidelity.

I've mentioned before that average delay on Internet connections seems to be 40% to 50% lower now than just 5 years ago, which is just one contributor to lower end-to-end delay. That doesn't mean high-delay connections don't exist - they do, for instance over dial-up or 3G. But in those cases it's still better to use a moderate packet rate (and bitrate), to minimize congestion risk.

The confusion may come from the fact that the trade-off between fidelity and delay changes towards high quality levels: once fidelity saturates, delay gets priority. Even more so because such high fidelity enables new, delay-sensitive applications like distributed music performances. This is reflected in the ultra-low delay requirements in the requirements document.

To summarize, the case for using sub-20 ms frame sizes with medium-fidelity quality is now weaker than ever, because the relative importance of fidelity has gone up.

[Christian]: May I present some results of the ITU-T SG12 on the perceptual effects of delay? For many years, it was assumed that 150 ms is the boundary for interactive voice conversations (see Nobuhiko Kitawaki and Kenzo Itoh: Pure Delay Effects on Speech Quality in Telecommunications, IEEE J. on Selected Areas in Commun., Vol. 9, No. 4, pp. 586-593, May 1991). Up to 400 ms, quality is still acceptable (about toll quality). The ITU-T G.107 quality model reflects this opinion. However, in recent years, new results have shown that the impact of delay on conversation quality is NOT as strong as assumed. At the ITU-T, numerous contributions have been made on this issue, e.g. the contribution of BT "Comparison of E-Model and subjective test data for pure-delay conditions" from 2007-01-08: http://www.itu.int/md/T05-SG12-C-0030/en

The conversational tests were done in controlled environments with nine pairs of subjects. The two subjects of each pair had the common task of sorting their sets of pictures into the same order. Other conditions: no echoes, G.711, no frame loss. [PICTURE at http://www.ietf.org/mail-archive/web/codec/current/msg01588.html] Legend: MOS-CQS are subjective conversational tests, MOS-CQE is the E-Model (G.107), MOS-LQO are results from PESQ. The delay is a one-way delay.

Besides MOS values, they also studied the subjective rating of percentage difficulty (%D). Starting at about 150 ms, it goes up and reaches 35% at 900 ms; after that it remains constant.

Also, LM Ericsson described very interesting results in "Investigation of the influence of pure delay, packet loss and audio-video synchronization for different conversation tasks" from 2007-09-24: http://www.itu.int/md/T05-SG12-C-0119/en For example: they conducted conversational tests similar to ITU-T P.805. Each conversation lasted about 3 to 5 minutes, and 11 pairs of experts took part.

[PICTURE at http://www.ietf.org/mail-archive/web/codec/current/msg01588.html] The tasks at 160 ms were completed about 50 s faster than the same tasks at 600 ms.

In the second test, about 60 naïve subjects and experts took part to solve a conversational task.

When they were asked about interactivity, the ratings looked worse.

Overall, it seems that the limit of 150 ms is greatly overestimated; much more relaxed timing is allowed.

[Benjamin]:

(1) The results conflict with common sense. A round-trip delay of 800 ms makes normal conversation extremely irritating in practice. I'm not surprised these results don't show up in laboratory tests, because fast conversations with interjections and rapid responses typically require a social context not available in a lab test.

It's possible that the ITU regards "extremely irritating" as "acceptable", since effective conversation is still possible. In that case, I would say that the working group intends to enable applications with much better than "acceptable" quality.

(2) Tests may have been done in G.711 narrowband, which introduces its own intelligibility problems and reduces quality expectation. Higher fidelity makes latency more apparent. Similarly, the equipment used may have introduced quality impairments that make the delay merely one problem among many.

(3) I presume the tests were done with careful equipment setup to avoid echo. The perceived quality impact of echo at 200 ms one-way delay is enormous, as shown in

http://downloads.hindawi.com/journals/asp/2008/185248.pdf

Using an echo-canceller impairs quality significantly. Imperfect echo cancellation leaves some residual artifact, which is also irritating at long delays.

The tests (even in the paper above) were performed using a telephone handset and earpiece. High-quality telephony with a freestanding speaker instead of an earpiece demands especially low delay due to the difficulties with echo cancellation.

[Marshall]:

This depends a lot on what sort of discussion is at issue (and, also on the culture of the participants).

For example, in my experience telepresence sessions tend to be structured meetings and can typically tolerate even half second delays without too much disruption, while for a one-on-one conversation on the same equipment the same delay can be pretty objectionable.

Having said that, I myself also find the previously attached graphs a little odd, and want to see a written description of just what sort of experiments they describe.

[Brian]: I agree with this. I was in a group that did some research on this (unpublished, unfortunately) and we confirmed that there is a cliff, around 500 ms round trip, after which conversation is impaired. It is remarkably consistent, is more or less independent of culture (with one interesting exception), and is really a cliff: under it, further improvement is hard to notice; over it, conversation is impaired, and the difference between, say, 750 and 1500 ms isn't all that significant.

Engineers who believe delay is a "less is better" quantity need to be educated that it is not. It is a threshold.

[JM]: Considering that the network delay is not a constant, you no longer have an absolute cliff. So reducing the delay means you can increase the distance without falling off the cliff.

[Benjamin]: One test in that paper told trained subjects to "Take turns reading random numbers aloud as fast as possible", on a pair of handsets with narrowband uncompressed audio and no echo. Subjects were able to detect round-trip delays down to 90 ms. Conversational efficiency was impaired even with round-trip delay of 100 ms.

Let me emphasize again that these delays are round-trip, not one-way, there is no echo, and the task, while designed to expose latency, is probably less demanding than musical performance.

...

I accept Brian Rosen's claim that a slow conversation doesn't normally suffer greatly from round-trip latencies up to 500 ms, but under some circumstances much lower latencies are valuable. Let's make sure they're achievable for those who can use them.

[Raymond]: Other than potential echo issues, the biggest problem with a one-way delay longer than a few hundred ms is that such a long delay makes it very difficult to interrupt each other, resulting in the start-stop-start-stop cycles I previously talked about. Therefore, I agree with Ben that if the lab test did not have echoes and did not involve the test subjects trying to interrupt each other, then the test results may appear more benign than what one would experience in the real world.

Note that the top curve in the first figure below is for "listening-only tests". Well, in that case there was no interaction/interruption at all, so if there were no echoes either, it is no wonder that the curve stayed essentially flat. I do wonder what made the curve go down at 1300 ms; I guess to understand this we need to know what the lab setup was for this test. Thus, I echo Marshall's opinion that we need the original paper/contribution.

My personal experience with the delay impairment is much worse than the middle curve (MOS-CQS) would suggest and is close to the bottom curve (MOS-CQE). Back in the early 1980s the phone calls I made from southern California to East Asia were carried through geosynchronous satellites with a one-way delay slightly more than 500 ms (see http://en.wikipedia.org/wiki/Geostationary_orbit). I absolutely hated it, because turn-taking was severely impaired and the only way to interrupt the person at the other side was to keep talking (rudely, I may say) until the other person finally stopped. Then, starting in the late 1980s, undersea cables were used to carry my traditional circuit-switched calls to the same person in East Asia, and all of a sudden the delay was much shorter and interrupting each other felt as easy as face-to-face conversation. It's a night-and-day difference! Even in the early 2000s, when I used my cell phone to call my son's cell phone in another cellular network, I could tell that there was a significant delay that noticeably impaired our turn-taking and our ability to interrupt each other, and I didn't like it at all. Now you know why I advocate low-delay voice communications, have been working on low-delay speech coding for two decades, and have even published a book chapter on low-delay speech coding :o)

[Stephen]: From my own experience (not testing) I agree with Brian's claim that 500 ms round trip is acceptable for most conversation. It does depend on what you are doing, and there are certainly tasks where much lower delays are needed.

[Mike]: Agreed that achieving low enough latencies for sidetone perception should not be a goal of the wg, but we should be aiming if at all possible for better than 250 ms one-way delay in typical (and non-tandemed) deployments. The knee of the one-way delay impairment factor begins rising non-linearly somewhere between 150 and 250 ms.

[Raymond]: If you read the published technical papers on G.718 and G.729.1 carefully, I think you will find that the real reason for the increased delay is not that they needed a longer delay to achieve better fidelity for speech, but that they wanted to extend speech codecs to also get good performance when coding general audio (music, etc.). To get good music coding performance, most audio codecs use the Modified Discrete Cosine Transform (MDCT) with a fairly large transform window, so most audio codecs have longer coding delays than speech codecs.

To code music well, G.718 and G.729.1 developers naturally had to use long MDCT transform windows on top of the codec delay already in AMR-WB and G.729. Even so, the resulting longer delays of G.718 and G.729.1 are still not any longer than typical delays of audio codecs; in fact, they are probably somewhat shorter.

My point is that the increased delays of G.718 and G.729.1 are purely a result of changing from "speech-only" to "speech and music". It's not because the G.718 and G.729.1 developers knew the network delay was getting shorter so they could be more wasteful with delay. Furthermore, even after they changed the codecs to handle music as well as speech, they still chose to make their codec delays shorter than the delays of most audio codecs. Why? They wanted to make their codec delays as short as they could. In fact, they even made an effort to introduce a "low-delay mode" into both G.718 and G.729.1. That shows they were pretty concerned about the higher delays they needed to have in order to code music well.

By the way, G.718 does NOT have "more than twice the delay" of AMR-WB as you said. AMR-WB has a 20 ms frame size, 5 ms look-ahead, and 1.875 ms of filtering delay, for a total algorithmic buffering delay of 26.875 ms. The "normal mode" of G.718 has a buffering delay of 42.875 ms for 16 kHz wideband input/output. That's only 59.5% higher than AMR-WB. For Layers 1 and 2 coding of speech, the "low-delay mode" shaves 10 ms off to give a delay of 32.875 ms, or only 22.3% higher than AMR-WB.
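For reference, the quoted percentages follow directly from those figures:

42.875 / 26.875 ≈ 1.595, i.e. 59.5% higher
32.875 / 26.875 ≈ 1.223, i.e. 22.3% higher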

When G.729.1 was first standardized in May 2006, there was already a low-delay mode for narrowband speech at 8 and 12 kb/s with an algorithmic buffering delay of 25 ms. Later, in August 2007, the developers made an effort to add another low-delay mode for wideband at 14 kb/s that has a buffering delay of 28.94 ms. If they wanted to sacrifice delay to get higher fidelity, as you suggested, then why would they bother to go back and add another low-delay mode for wideband?

In fact, only a few months ago in their G.729.1 paper in IEEE Communications Magazine, October 2009, Varga, Proust, and Taddei still emphasized in multiple instances the importance of achieving a low coding delay. I will quote two of the instances:

"The low-delay mode... was added to the first wideband layer at 14 kb/s of G.729.1 (August 2007). The motivation was to address applications such as VoIP in enterprise networks where low end-to-end delay is crucial" and

"Indeed, delay is an important performance parameter, and transmitting speech with low end-to-end delay is also required in several applications making use of wideband signals".

In summary, I do not see a clear trend where codec developers are becoming more wasteful with delay in order to get higher fidelity. If anything, in recent years I saw a trend of low-delay audio coding, such as low-delay AAC and the CELT codec, and I saw the effort by G.718 and G.729.1 developers to introduce low-delay modes.

In any case, I thought a few days ago a consensus was already reached on the WG email reflector that the IETF codec needs to have a low-delay mode with a 5 to 10 ms codec frame size so that it can handle delay-sensitive applications (that is, 5 out of 6 applications listed in the charter and codec requirements document). Therefore, I think the discussion in your last email and my current email is mostly of academic interest only and doesn't and shouldn't affect how the IETF codec is to be designed.

CONSENSUS: Impairments start somewhere between 150 and 250ms one-way delay.

#32 invalid Playout/Dejittering Buffer hoene@…
Description

This ticket summarized the discussions regarding dejittering buffer issues...

Query Language

query: TracLinks and the [[TicketQuery]] macro both use a mini "query language" for specifying query filters. Filters are separated by ampersands (&). Each filter consists of the ticket field name, an operator and one or more values. Multiple values are separated by a pipe (|), meaning that the filter matches any of the values. To include a literal & or | in a value, escape the character with a backslash (\); an example follows the list of operators below.

The available operators are:

= the field content exactly matches one of the values
~= the field content contains one or more of the values
^= the field content starts with one of the values
$= the field content ends with one of the values
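For example, the following filter (the keyword values are hypothetical) matches tickets whose keywords field contains either codec or the literal string a&b:

keywords~=codec|a\&b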

All of these operators can also be negated:

!= the field content matches none of the values
!~= the field content does not contain any of the values
!^= the field content does not start with any of the values
!$= the field content does not end with any of the values
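For example, the following query string (the field values are hypothetical) matches closed tickets whose resolution is anything but fixed:

status=closed&resolution!=fixed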

The date fields created and modified can be constrained by using the = operator and specifying a value containing two dates separated by two dots (..). Either end of the date range can be left empty, meaning that the corresponding end of the range is open. The date parser understands a few natural date specifications like "3 weeks ago", "last month" and "now", as well as Bugzilla-style date specifications like "1d", "2w", "3m" or "4y" for 1 day, 2 weeks, 3 months and 4 years, respectively. Spaces in date specifications can be omitted to avoid having to quote the query string.

created=2007-01-01..2008-01-01 query tickets created in 2007
created=lastmonth..thismonth query tickets created during the previous month
modified=1weekago.. query tickets that have been modified in the last week
modified=..30daysago query tickets that have been inactive for the last 30 days
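These date filters can be combined with the other constructs on this page; as a sketch (assuming the [[TicketQuery]] macro accepts the same date syntax as the query module, and with an illustrative column choice):

[[TicketQuery(created=2007-01-01..2008-01-01,format=table,col=summary|owner)]]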

See also: TracTickets, TracReports, TracGuide, TicketQuery
