
Subtitle The Interpreter High Quality



subtitle(___,Name,Value) sets properties on the text object using one or more name-value pair arguments. Specify the properties after all other input arguments. For a list of properties, see Text Properties.
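For example, a minimal sketch of this syntax (the plot data, subtitle text, and property values below are illustrative):

% Plot some data, add a title, then a subtitle whose appearance is
% set via name-value pair arguments.
plot(1:10, (1:10).^2)
title('Population Growth')
subtitle('Years 2000-2020', 'FontAngle', 'italic', 'Color', [0.4 0.4 0.4])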







subtitle(target,___) specifies the target object for the subtitle. The target object can be any type of axes, a tiled chart layout, or an array of objects. Specify the target object before all other input arguments.
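For instance, a short sketch using axes in a tiled chart layout as targets (the panel contents are illustrative); the handle passed as the first argument determines which object receives the subtitle:

% Create two axes in a tiled chart layout and give each its own subtitle.
tiledlayout(1, 2);
ax1 = nexttile;
plot(ax1, rand(10, 1))
subtitle(ax1, 'Left Panel')
ax2 = nexttile;
plot(ax2, rand(10, 1))
subtitle(ax2, 'Right Panel')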


If you add a title or subtitle to an axes object, then the font size property for the axes also affects the font size for the title and subtitle. The title and subtitle font sizes are the axes font size multiplied by a scale factor. The FontSize property of the axes contains the axes font size. The TitleFontSizeMultiplier property of the axes contains the scale factor. By default, the axes font size is 10 points and the scale factor is 1.1, so the title and subtitle each have a font size of 11 points.
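This arithmetic can be verified at the command line; the sketch below assumes the default property values just described:

ax = gca;
disp(ax.FontSize)                  % 10 points by default
disp(ax.TitleFontSizeMultiplier)   % 1.1 by default
title('Title')                     % rendered at 10 * 1.1 = 11 points
subtitle('Subtitle')               % also 11 points
ax.FontSize = 12;                  % title and subtitle scale to 13.2 points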


Standard subtitles are designed for viewers who hear the audio but cannot understand it. However, there is a type of subtitle specifically designed to support hearing-impaired individuals: Subtitles for the Deaf and Hard of Hearing (SDHH). SDHH subtitles contain not only the spoken dialogue but also information about background sounds and speaker changes, along with a translation of the script.


Whether you engage the interpreter for simultaneous or consecutive interpretation, the interpreter should understand the subject matter. Providing them with information and supporting documents in advance is therefore essential so they can prepare well.


Because subtitles are designed for people who can hear the audio but don't understand it, they are mainly used in movies and TV series by viewers who like watching content in its original version but cannot understand the language. Subtitles are also produced and synchronized before the video is released, and most on-demand video streaming platforms include them.


When it comes to deciding on the technology, besides content type and user needs, setting and budget also play a key role. There are services that event managers can use to get access to interpreters and human-interpreted live captions in virtual and hybrid setups. Platforms such as Interprefy enable real-time simultaneous interpretation and live captioning for any type of event or meeting, empowering event managers to deliver content in a variety of languages.


Depending on the goals for your video project, you may choose to provide the content in multiple languages via a voice-over instead of subtitles. We work with only the best native-speaking voice-over professionals and collaborate with clients to choose their preferred style of voice-over talent.


Before commencing his or her duties, an interpreter appointed under this subchapter shall take an oath in substantially the following form: "Do you [swear] [affirm] that you will make a true and impartial interpretation using your best skills and judgment in accordance with the standards and ethics of the interpreter profession and that you will abide by the Arkansas Code of Professional Responsibility for Interpreters in the Judiciary, [so help you God][under the penalty of perjury]?"


Another key difference between translation and interpretation: professional interpreters work bi-directionally. That is, they are called on to interpret from their non-native language into their native language, and vice versa.


When done well, interpreters make it easy to communicate with live audiences. Businesses looking to go global will most likely use interpretation services when hosting live events, participating in conferences, and in other situations where direct, real-time spoken communication with their customers, employees, or business partners is needed.


Because the subtitles will ultimately be synced to the original video and the text will appear and vanish on the screen as the characters are talking, the translation must be easy to read at a glance and not distract from the video.


The subtitles must match the speech patterns of the original speakers and the speed of the original video. As a result, subtitlers must adhere to strict character limits. Usually, the translated subtitles must fit on only two lines of text, with each line containing no more than 35-42 characters. Like interpretation, this requires linguists who can masterfully paraphrase what is being said on screen.
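As a rough illustration of these constraints, here is a hypothetical MATLAB helper; fitsSubtitleLimits is not part of any subtitling tool, and the 42-character default simply reflects the upper end of the range quoted above:

% Returns true if a candidate subtitle respects the limits described
% above: at most two lines, each no longer than maxChars characters.
function ok = fitsSubtitleLimits(lines, maxChars)
    if nargin < 2
        maxChars = 42;   % upper end of the 35-42 character range
    end
    ok = numel(lines) <= 2 && all(cellfun(@strlength, lines) <= maxChars);
end

For example, fitsSubtitleLimits({'I never said that.', 'You misheard me.'}) returns true, while a three-line candidate would fail.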


In recent years respeaking has become the preferred method for live intralingual subtitling; based on speaker-dependent speech recognition technology, it is used to subtitle live TV broadcasts and events for the deaf and hard-of-hearing. The interlingual variant of respeaking is beginning to emerge as a translation mode that can provide accessibility to all, across both linguistic and sensory barriers. There are striking similarities between interlingual respeaking and simultaneous interpreting in terms of process; however, the two modes differ greatly in terms of end product, i.e. a set of subtitles vs. an oral translation. This empirical study analysed simultaneous interpreting and interlingual respeaking (from English into Italian) in the same conference, to compare the semantic content conveyed to the audience via the interpreted speeches and the live subtitles. Results indicate greater semantic loss in the subtitles (especially through omissions), but little difference in the frequency of errors causing meaning distortion. Some suggestions for future research and training are provided in the conclusions.


As IRSP is a very recent development, there is not much research to determine its viability. This paper aims to contribute to the discussion by presenting a small-scale empirical study based on an MA thesis (Luppino 2016-17) that compared the target language (henceforth, TL) speeches produced by simultaneous interpreters with the subtitles produced by interlingual respeakers working at the same conference. A multimedia data archive was created; then, a smaller sub-corpus of 4 speeches was selected for the study. The focus was on assessing how much of the semantic content of the source language (henceforth, SL) speeches was conveyed to the audience via the interpreted speeches and the subtitles. A dedicated analysis grid was developed and applied to our data to shed some light on the challenges posed by the two modes and to inform future IRSP research and training. The paper begins with a brief overview of respeaking research, with a special focus on IRSP (section 2); it then presents the data and methodology in section 3, the analysis in section 4, and some conclusions in section 5.


From the point of view of the end product, respeaking is studied as a form of (live) subtitling, with the related change in semiotic code (from spoken to written) and need for text reduction connected to the speed constraint (Romero-Fresco 2009, Van Waes et al. 2013, Sandrelli 2013). The main focus of the product-oriented studies has been the development of models to assess subtitle accuracy and the analysis of the specific challenges posed by different settings and text types (Eugeni 2009, Romero-Fresco 2011, Sandrelli 2013). The NER model (Romero-Fresco 2011) is the most widely used model for assessing the accuracy of live subtitles produced via respeaking.[2] It distinguishes between (software-related) recognition errors and (human) edition errors, and a score is attributed to each error depending on its severity (minor, standard or serious). After testing the NER model on different TV genres, a score of 98 per cent has been suggested as the minimum accuracy threshold for usable intralingual subtitles (Romero-Fresco 2011). The model has been adopted by Ofcom, the UK broadcasting regulator, which commissioned four reports on the quality of live subtitling on British television (Ofcom 2015a, 2015b). Most of the available research on intralingual respeaking has been conducted in TV settings, while the Respeaking at Live Events project (Moores 2018, 2020) is looking at the feasibility of respeaking in museum tours, conferences, lectures and Q&A panels after cinema screenings and theatre shows. The aim is to identify the specific requirements of each setting and produce best practice guidelines to organise services efficiently.


Turning to product-oriented research on IRSP, a reliable method to assess the accuracy of interlingual live subtitles and quality standards for this translation mode must still be defined. Romero-Fresco and Pöchhacker (2017) developed the NTR model, which distinguishes between recognition errors and human errors.[3] Translation errors include both content-related (omissions, additions and substitutions) and form-related errors (grammatical correctness and style). The model acknowledges that some errors are more serious than others in terms of the effect they have on viewers, and distinguishes between minor, major and critical errors (-0.25, -0.50 or -1 point, respectively). Minor errors slightly alter the message but do not hamper comprehension; major errors introduce bigger changes, but the overall meaning of the text is preserved; critical errors result in grossly inaccurate and misleading information and affect comprehension significantly.
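To make the scoring concrete, here is a hedged numerical sketch of an NTR-style calculation in MATLAB. The severity weights follow the description above, the formula Accuracy = (N - T - R) / N x 100 is the one commonly associated with the model, and all error counts are invented for illustration:

N = 500;                           % words in the live subtitles
weights = [0.25, 0.50, 1.00];      % minor, major, critical penalties
t_errors = [4, 2, 1];              % translation errors by severity
r_errors = [3, 1, 0];              % recognition errors by severity
T = dot(weights, t_errors);        % weighted translation penalty: 3
R = dot(weights, r_errors);        % weighted recognition penalty: 1.25
accuracy = (N - T - R) / N * 100;  % 99.15 per cent in this example
fprintf('NTR accuracy: %.2f%%\n', accuracy)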


Dawson and Romero-Fresco (forthcoming) report on the results of a four-week pilot training course delivered entirely online within the ILSA (Interlingual Live Subtitling for Access) project, the first IRSP course ever developed. Fifty students with a training background in subtitling or interpreting participated in the course, which included three weekly sessions. After analysing their performances in the final tests, the authors concluded that IRSP is indeed feasible, with over 40 per cent of subjects hitting or exceeding the 98 per cent NTR mark after this relatively short course.[5] On average, the student interpreters performed better than the subtitlers, but some of the latter also did well, so an interpreting background does not seem to be mandatory.

