Opened 5 years ago

Closed 5 years ago

#43 closed defect (fixed)

How to know the modality of a language indication?

Reported by: gunnar.hellstrom@…
Owned by: draft-ietf-slim-negotiating-human-language@…
Priority: minor
Milestone:
Component: negotiating-human-language
Version:
Severity: -
Keywords:
Cc:

Description

A review comment said that a simple way is required to decide whether a language tag denotes a sign language or a written or spoken language.

We have not responded to that comment.

I know of one application that scans the IANA language subtag registry at startup for this purpose, looking for the word "sign" in the tag description. But that might be seen as an inappropriate use of an IANA registry if it were adopted by every phone in the future.
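The heuristic described above can be sketched as follows. This is a minimal illustration, not the cited application's code: it parses a small inline excerpt of the IANA Language Subtag Registry (a real client would fetch and cache the full registry file) and flags subtags whose Description field contains "Sign".

```python
# Sketch of the "scan the registry for the word Sign" heuristic.
# REGISTRY_EXCERPT is a hand-picked sample of real registry records;
# a deployed application would download the full registry instead.

REGISTRY_EXCERPT = """\
Type: language
Subtag: en
Description: English
%%
Type: language
Subtag: ase
Description: American Sign Language
%%
Type: language
Subtag: ssp
Description: Spanish Sign Language
"""

def parse_records(text):
    """Split the registry's record-jar format on '%%' separators
    and return each record as a dict of its fields."""
    records = []
    for chunk in text.split("%%"):
        fields = {}
        for line in chunk.strip().splitlines():
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
        if fields:
            records.append(fields)
    return records

def signed_subtags(records):
    """Return the subtags whose Description mentions 'Sign'."""
    return {
        r["Subtag"]
        for r in records
        if r.get("Type") == "language" and "Sign" in r.get("Description", "")
    }

print(sorted(signed_subtags(parse_records(REGISTRY_EXCERPT))))
# prints: ['ase', 'ssp']
```

The fragility is visible here: the classification rests on free-text descriptions rather than on any machine-readable modality field, which is exactly why relying on it in every endpoint is questionable.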

What can we say about this review comment?

1) Do we need to add a modality indication parameter in the syntax?

2) Or shall we strictly limit audio to carrying spoken languages, video to carrying signed languages, and text and WebRTC data channels to carrying written languages?

3) Or shall we leave this problem to implementation?

The review comment on this topic was from Dale Worley and is found in section B of:

https://www.ietf.org/mail-archive/web/slim/current/msg00766.html

following the sentence about specifying a view of the speaker in the video:

"I think this mechanism needs to be described more exactly, and in
particular, it should not depend on the UA understanding which
language tags are spoken language tags."

It is this part we have not addressed: "it should not depend on the UA understanding which language tags are spoken language tags".
That is a general issue, not really tied to the question of the view of a speaker in the video stream.

Change History (1)

comment:1 Changed 5 years ago by bernard_aboba@…

  • Resolution set to fixed
  • Status changed from new to closed

I believe that this issue has been resolved in -18 Section 5.4:
https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-18#page-7
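For illustration, the resolution in Section 5.4 amounts to option 2 above: modality is inferred from the media type the language attribute appears in. A hedged sketch as an SDP offer (the hlang-send/hlang-recv attribute names follow the draft's syntax; the ports, payload types, and the tags "es" and "ase" are illustrative):

```
m=audio 49170 RTP/AVP 0
a=hlang-send:es
a=hlang-recv:es
m=video 51372 RTP/AVP 31
a=hlang-send:ase
a=hlang-recv:ase
```

Here "es" in the audio stream means spoken Spanish and "ase" in the video stream means American Sign Language, so the UA needs no knowledge of which language tags denote sign languages.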
