SaaS / Hosted Monthly Release Notes - June 2020 (10.3.005 - 10.3.008)

Analysis > Predictive Coding Templates: Select how date data is treated

We have added an option to Predictive Coding Templates that allows administrators to select how date type field data should be treated in the model.

To access the option, go to Analysis > Predictive Coding Templates and select a template. On the Fields page, click in the Date Value column, and then select a value for how date information should be treated in the template.

Predictive Coding Template - Fields page

The options are as follows:

  • Text: Date information is treated as a text string.
  • Day, month, and year: Date information is modeled without time. This option is the default.
  • Month: Date information is treated as a number with January = 1, February = 2, and so on.
  • Day: Date information is treated as a number.
  • Day of the week: Date information is treated as a number with Sunday = 1, Monday = 2, and so on.
  • Year: Date information is treated as a number.
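
The mappings above can be sketched in code. The following is an illustrative sketch only (not application code) of how a single date might be encoded under each Date Value option:

```python
from datetime import date

def date_value(d: date, option: str):
    """Illustrative numeric encodings for the Date Value options."""
    if option == "Text":
        return d.isoformat()              # date treated as a plain string
    if option == "Day, month, and year":
        return (d.year, d.month, d.day)   # date modeled without time
    if option == "Month":
        return d.month                    # January = 1, February = 2, ...
    if option == "Day":
        return d.day                      # day of the month as a number
    if option == "Day of the week":
        return d.isoweekday() % 7 + 1     # Sunday = 1, Monday = 2, ...
    if option == "Year":
        return d.year
    raise ValueError(option)

d = date(2005, 1, 16)                     # a Sunday in mid-January 2005
print(date_value(d, "Month"))             # 1
print(date_value(d, "Day of the week"))   # 1
```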

Note: The application treats any date fields selected in templates created before this change as Text. If the template is used in any models, the date value cannot be changed.

The choice of Date Value option depends on the case data and which aspect of the date information has an impact on the model. For example, if documents with dates clustered around mid-January 2005 are meaningful, select the Day, month, and year value to model date information in that way. If instead the day of the week a date falls on (Monday, Tuesday, and so on) is relevant, use the Day of the week value.

Manage Documents > Ingestions: Upgrade to the Nuix Engine 8.4.5

Ingestions now uses the Nuix Workstation 8.4.5 processing engine. The engine upgrade resolves out-of-memory failures that occurred when processing some forensic images.

Manage Documents > Productions: Apply headers

Administrators now have the option to apply headers to production images. In previous releases, only footer settings were available.

You can access header options on the Endorsements page for a production. Click in any of the boxes for Left, Middle, or Right header to launch the variable builder.

Endorsements page

You can select any combination of headers and footers. Note that the width, height, and font size are the same for both headers and footers. As with footers, the Header height setting allows you to add space to the top of the image when endorsing headers such that the header text does not overlap with image text.

Manage Documents > Renumbering: Add option to number by document

When renumbering documents, administrators can now choose to increment page numbers by document or page. In the previous release, you could increment numbering by page only.

  • On the Document ID page, in the Format list, administrators can now select either Prefix, Box, Folder, Page - increment by page or Prefix, Box, Folder, Page - increment by document.
  • On the Endorsement page, for each header and footer option, administrators can now select Document ID plus page number.
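
The difference between the two formats can be sketched as follows. This illustrates the numbering logic only; the prefix, padding, and page-suffix style are hypothetical, not the application's exact output:

```python
def number_pages(docs, prefix="ABC", increment="page"):
    """Sketch of the two numbering schemes. `docs` is a list of page
    counts, one entry per document; returns one label per page."""
    labels, counter = [], 0
    for page_count in docs:
        if increment == "document":
            counter += 1                  # one number per document...
        for page in range(1, page_count + 1):
            if increment == "page":
                counter += 1              # ...or one number per page
            suffix = f".{page}" if increment == "document" else ""
            labels.append(f"{prefix}-{counter:07d}{suffix}")
    return labels

# Two documents: the first has 2 pages, the second has 1 page.
print(number_pages([2, 1], increment="page"))
# ['ABC-0000001', 'ABC-0000002', 'ABC-0000003']
print(number_pages([2, 1], increment="document"))
# ['ABC-0000001.1', 'ABC-0000001.2', 'ABC-0000002.1']
```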

Portal Management > Reports: New Max data (GB) column in Hosted Details report

The Hosted Details report includes a new column named Max data (GB), shown in the following figure.

This column displays the maximum active hosted data size for a case within a specified date range. The value is the sum of the following data: base documents, production renditions, databases, the Elasticsearch index, the content index, and Predict data. The column also includes a calendar icon. Hover over the calendar icon to display the date of the maximum value.

Reports - Hosted Details page

If you download a report, it will now include the following two columns and values:

  • Max data (GB): The maximum active hosted data size for a case.
  • Max data (date): The date the maximum active hosted data size was captured.

Portal Management > Cases and Servers: Delete case record from the portal database for deleted cases

As a system administrator, after a case is deleted, you can delete the case record for the deleted case from the portal database. You can then create a new case using the same case name.

Use the following procedure to delete the case record for a deleted case.

  1. On the Portal Management > Cases and Servers > Deleted Cases page, select the check box next to a case.
  2. On the toolbar, click Delete record.
  3. In the Delete record dialog box, shown in the following figure, select the Delete case record and all metrics check box.

    Note: Once you select this check box and click OK in the following step, the case record and all metrics are permanently deleted from the portal database.

  4. Click OK.

    Delete record dialog box

After you delete the record, the deleted case no longer appears on the Deleted Cases page or the Portal Management > Reports page.

Portal Management > Settings > Log Options: Multiple S3 bucket entries supported for the Telemetry archive configuration

The Telemetry archive configuration field on the Portal Management > Settings > Log Options page now has a new "S3Buckets" setting that supports multiple entries for the key, secret, region, and bucket values.

When telemetry is configured to store the logs in the database, and the configuration string includes multiple S3 buckets, the telemetry data is pushed to all S3 buckets.

The following example shows how to format the JSON configuration string with multiple S3 buckets.

{
  "Checkpoint": 0,
  "CheckpointRPF": 0,
  "S3Buckets": [
    { "Key": "******", "Secret": "******", "Region": "us-east-1", "Bucket": "s3-bucket-1" },
    { "Key": "******", "Secret": "******", "Region": "us-east-1", "Bucket": "s3-bucket-2" }
  ],
  "CleanupMaxDays": 3,
  "ScheduleId": 22,
  "IntervalInMinutes": 30,
  "NRecentRecordsToReturn": 10000
}

Connect API Explorer: maxActiveHostedSize and dateOfMaxActiveHostedSize case statistics

Two new case statistics are available through the Connect API Explorer. They return the maximum value of aggregateActiveHostedSize, and the date of that value, within a specified date range.

Note: The aggregateActiveHostedSize statistic is the sum of sizeOfBaseDocumentsHostedDetails, sizeOfRenditionsHostedDetails, aggregateDatabases, sizeOfElasticSearchIndex, dtIndexSize, and sizeOfFolderData_Predict.

  • maxActiveHostedSize: Returns the maximum value of aggregateActiveHostedSize within a specified date range. The value is calculated from the first minute of the startDate (12:00:00 AM) to the last minute of the endDate (11:59:59 PM) in Coordinated Universal Time (UTC).
    • If you provide only an endDate, the returned value is the highest value of aggregateActiveHostedSize from the beginning of the case through the last minute of the specified endDate.
    • If you provide no startDate or endDate, the returned value is the highest value of aggregateActiveHostedSize over the entire life of the case, from the beginning of the case through the current day.
  • dateOfMaxActiveHostedSize: Returns the date of the maxActiveHostedSize within a specified date range.
    • If you provide only an endDate, the returned value is the date of the maxActiveHostedSize from the beginning of the case through the last minute of the specified endDate.
    • If you provide no startDate or endDate, the returned value is the date of the maxActiveHostedSize over the entire life of the case, from the beginning of the case through the current day.
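
The date-range semantics above can be sketched as follows; the sampling data structure is hypothetical and stands in for the metrics the application stores:

```python
from datetime import datetime, timezone

def max_active_hosted_size(samples, start_date=None, end_date=None):
    """samples: (UTC datetime, size-in-GB) pairs. Returns the maximum size
    and its date within the window, mirroring the first-minute/last-minute
    semantics described above."""
    start = (datetime.fromisoformat(start_date).replace(tzinfo=timezone.utc)
             if start_date else datetime.min.replace(tzinfo=timezone.utc))
    end = (datetime.fromisoformat(end_date).replace(
               hour=23, minute=59, second=59, tzinfo=timezone.utc)
           if end_date else datetime.max.replace(tzinfo=timezone.utc))
    size, when = max((s, ts) for ts, s in samples if start <= ts <= end)
    return size, when
```

With both dates given, only samples inside the window are considered; with no dates, the whole life of the case is considered.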

Sample query:

query {
  cases {
    name
    statistics(startDate: "2020-04-01", endDate: "2020-04-30") {
      maxActiveHostedSize
      dateOfMaxActiveHostedSize
    }
  }
}

SaaS / Hosted Monthly Release Notes - May 2020 (10.3.001 - 10.3.004)

Renumbering: Change the Document IDs and leveling of documents

You can now use the Renumbering tool in Nuix Discover to change the Document ID format of documents. Once you have specified a Document ID format, the application images the selected documents, converts them to PDF, applies the specified numbering rules, relevels the documents to match the new Document IDs, applies endorsements to the PDF images, and replaces the PDF images with the endorsed versions. You can view the endorsed PDF images in the Image viewer in Nuix Discover.

Renumber documents

You can renumber imported documents using the Renumbering option on the Tools menu, shown in the following figure.

Caution: Do not run multiple simultaneous Renumbering jobs with the same Document ID prefix in a case.

Select one or more documents to enable this tool.

Renumbering document selection list

On the Exclusions page, shown in the following figure, you can determine the following information:

  • Types of files to include in or exclude from renumbering.
  • Whether the native file should still appear in the Image viewer in Nuix Discover.
  • How the application handles documents that fail to image to PDF.

Renumbering > Exclusions page

On the Slipsheets page, shown in the following figure, you can select which files to insert slipsheets for, and use the variable builder to determine what text appears on the slipsheets.

Renumbering > Slipsheets page

On the Document ID page, shown in the following figure, you can determine the following information:

  • The format for the renumbered files. You can select a format that includes a prefix, box, folder, page, and delimiter, or a format that includes only a prefix and padding.
  • Whether document families must stay together in a folder after renumbering.
  • Whether levels should be updated to correspond to the new numbering. This option is available only if you select the Prefix, Box, Folder, Page format.

Renumbering Document ID page

On the Endorsement page, shown in the following figure, you can determine what information goes in the header and footer of renamed documents.

Renumbering Endorsement page

Translate: Propagate translated text to duplicate documents

When you submit a document for translation, the translated text is propagated across all duplicates of the document, so that you do not have to translate each duplicate document individually.

Note: Documents with branded redactions are not translated.

Also, the following applies to the translated duplicate documents:

  • The Translation Language system field is coded with the same target language as the translated document.
  • The Translation Status system field is coded with the same value as the translated document.

Renumbering: Enable the renumbering feature

On the Security > Features page, administrators can enable the renumbering feature using the Processing - Renumbering option.

Renumbering: Enable renumbering system fields

On the Case Setup > System Fields page, administrators can make renumbering-related system fields available to users. If the renumbering system fields are enabled, users can search for the fields and display the fields as columns in the List pane.

The following renumbering system fields are available:

  • Renumbering Status
  • Renumbering Previous Document ID
  • Renumbering ID

Renumbering: View renumbering job properties

On the Manage Documents > Renumbering page, administrators can view the properties and progress of renumbering jobs. Click a renumbering job in the list to view the properties or errors for the job.

Note: Administrators can allow Group Members and Group Leaders to access the Manage Documents > Renumbering page. On the Security > Administration page, in the Leaders or Members columns, set the Manage Documents – Renumbering Management function to Allow, and then click Save.

Exports: Option to include blank text files

For custom export types (base or renditions), a new option is available in the Export window to include a blank .txt file for all documents in the export that are missing a .txt file. For base documents, the option is available on the File types page in the Settings window (available when you click the Settings button, or gear).

Option for base document export:

Export window Endorseable Image files options

For rendition documents, the option is available on the File types page.

Option for document export:

Export (Renditions) File types page

If you select this option along with the option to export content files, the application exports a blank .txt file for each document without an existing .txt file or associated extracted text. For base documents, the application names the .txt file according to the document ID. For rendition documents, the application names the .txt file according to the production document label. The blank .txt files are referenced in any load files that have a field for the text file name.

Note: When exporting base documents, if the application excludes any .txt files from an export because of annotations, a blank .txt file is not exported for those documents. The option to omit text files if a document is annotated is on the Annotations page in the Settings window (available when you click the Settings button, or gear).

To help administrators easily identify documents for which blank .txt files were exported, the following message appears on the Warnings page of the export job: “A blank content file (.txt) was exported because no content/.txt file was found for a document.”
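
The naming rule can be sketched as follows; the field names here are hypothetical stand-ins for the export metadata:

```python
def blank_text_file_names(docs, export_type="base"):
    """Return names of blank .txt files to emit: one per document with no
    existing text. Base exports use the document ID; rendition exports use
    the production document label."""
    key = "document_id" if export_type == "base" else "production_label"
    return [doc[key] + ".txt" for doc in docs if not doc.get("has_text")]

docs = [
    {"document_id": "DOC-001", "production_label": "ABC-0000001", "has_text": False},
    {"document_id": "DOC-002", "production_label": "ABC-0000002", "has_text": True},
]
print(blank_text_file_names(docs))                 # ['DOC-001.txt']
print(blank_text_file_names(docs, "renditions"))   # ['ABC-0000001.txt']
```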

Imaging: Add time zone setting for email file conversion

Administrators can now select a time zone for rendering native email files into images. The Time zone option is available in the Manage Documents > Imaging-Automated > Settings window on the Email and Website page. Administrators can select Use ingestions default or a specific time zone. If the administrator selects Use ingestions default, the application uses the time zone set in the default settings for Ingestions.

Imports: Prevent the creation of a new field with the same name as a system field

In the Import settings window, on the Field Map page, if a user creates a new field with the same name as an existing system field but of a different type, the application does not allow the user to continue. The field is outlined in red, and the following message appears: "New field cannot match an existing system field's name."

Processing > Index Status: Only document entities are included in the index status counts

On the Portal Management > Processing > Index Status page, shown in the following figure, only document entity items are included in the indexing counts in the Documents columns (Total, Indexed, Waiting, Excluded, Failed). Non-document entity items are not captured.

Portal Management > Processing > Index Status page

Organizations: Schedule daily case metrics jobs

System administrators can now schedule daily case metrics jobs for organizations and all cases in those organizations.

Note: This feature is not available to portal administrators.

Use the following procedure to schedule a daily case metrics job for an organization.

  1. On the Portal Management > Organizations page, on the toolbar, click the Case metrics button.

    The Case metrics settings dialog box appears.

  2. In the Case metrics settings dialog box, shown in the following figure, in the Time list, select a time.

    Note: The time appears in the user’s local time.

  3. Select one or more organizations.

    Note: To select all organizations, select the blue checkmark, shown in the following figure.

  4. Click Save.
    Case metrics settings dialog box

    The jobs are scheduled to run daily, at the time you selected. The newly scheduled jobs are added to all existing cases for the selected organization or organizations. For cases that are added to an organization after the job has been scheduled, the settings for the organization apply.

    Note: These settings do not override previously scheduled jobs.

Use the following procedure to cancel a daily case metrics job.

  1. Open the Case metrics settings dialog box.
  2. Clear the check box for the selected organization or organizations.
  3. Click Save.

After you schedule a daily case metrics job, in the table on the Portal Management > Organizations page, an icon in the second column indicates if a daily case metrics job is scheduled for an organization, as shown in the following figure.

Note: This column is visible only to system administrators.

Portal Management > Organizations page

Once the daily case metrics job is complete, the values in the following columns are updated on the Portal Management > Reports > Hosted Details page:

  • Base documents (GB)
  • Production renditions (GB)
  • Databases (GB)
  • Elasticsearch index (GB)
  • Content index (GB)
  • Predict (GB)

The values in the following columns are not updated as part of a daily case metrics job. Rather, the values in these columns reflect the values from the last Gather case metrics job that was run:

  • Orphan (GB)
  • File transfer data (GB)
  • Archive data (GB)
  • Missing (GB)

To update the values for these columns, you must run a full Gather case metrics job on the Portal Management > Processing > Jobs page.

Connect API Explorer: Assign users to case groups using the userGroupAssign mutation

The Connect API Explorer userGroupAssign mutation allows you to assign users to case groups to manage case access. The mutation can perform multiple assignments at once by pairing a single userId with a groupId, or multiple userIds with a groupId. Only the userId field supports this many-to-one format; all other fields accept a single value per assignment.

If a user you assign to a group already has an assignment to that group, the notChangedCount increases by one for each such existing assignment.

Required fields:

  • userId
  • caseId
  • groupId

Sample mutation:

mutation {
  userGroupAssign(input: [
    { userId: [7,9,10,11], groupId: 13, caseId: 8 },
    { userId: 8, groupId: 13, caseId: 4 }
  ]) {
    totalCount
    successCount
    errorCount
    notChangedCount
  }
}
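
The counting behavior described above, including how notChangedCount tallies existing assignments, can be sketched as follows; the tallying semantics are an assumption for illustration:

```python
def tally_assignments(inputs, existing):
    """inputs: dicts with userId (int or list of ints), groupId, caseId.
    existing: set of (userId, groupId, caseId) tuples already assigned.
    Returns result counters in the shape the mutation reports."""
    total = success = not_changed = 0
    for item in inputs:
        uids = item["userId"] if isinstance(item["userId"], list) else [item["userId"]]
        for uid in uids:
            total += 1
            key = (uid, item["groupId"], item["caseId"])
            if key in existing:
                not_changed += 1          # user already in that group
            else:
                existing.add(key)
                success += 1
    return {"totalCount": total, "successCount": success,
            "errorCount": 0, "notChangedCount": not_changed}

# Mirrors the sample mutation; user 7 is already assigned to group 13 in case 8.
result = tally_assignments(
    [{"userId": [7, 9, 10, 11], "groupId": 13, "caseId": 8},
     {"userId": 8, "groupId": 13, "caseId": 4}],
    existing={(7, 13, 8)})
print(result)  # {'totalCount': 5, 'successCount': 4, 'errorCount': 0, 'notChangedCount': 1}
```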

Connect API Explorer: Update organization settings using the organizationUpdate mutation

The Connect API Explorer organizationUpdate mutation gives system and portal administrators the ability to update organization settings to help manage the organizations within the application.

Required fields:

  • organizationId: Integer, identifies the organization in the portal.

Optional fields:

  • name: String, organization name in the portal.
  • accountNumber: String, account number of the organization being modified.
  • caseId: Integer, identifies the default template case for the organization in the portal.

Sample mutation:

mutation {
  organizationUpdate(input: [
    {organizationId: 4, name: "ABC Corp", accountNumber: "87597117"},
    {organizationId: 6, name: "XYZ Corp", caseId: 10}
  ]) {
    totalCount
    successCount
    errorCount
    notChangedCount
  }
}

Connect API Explorer: Unassign users from case groups using the userGroupUnassign mutation

The Connect API Explorer userGroupUnassign mutation allows you to unassign users from case groups to manage case access more precisely. Portal administrators who are assigned to a case can unassign portal users and other portal administrators from the groups in that case.

Required fields:

  • userId: Integer, identifies the user in the portal.
  • caseId: Integer, identifies the case in the portal.
  • groupId: Integer, identifies the user group in the case.

Sample mutation:

mutation {
  userGroupUnassign(input: [
    {userId: [7,9,10,11], groupId: 13, caseId: 8},
    {userId: 8, groupId: 13, caseId: 4}
  ]) {
    totalCount
    successCount
    errorCount
    notChangedCount
  }
}

SaaS / Hosted Monthly Release Notes - April 2020 (10.2.009 - 10.3.000)

Introducing the Memo Editor pane

Nuix Discover now has a new Memo Editor pane, as shown in the following figure. This new pane is available for both documents and entities. It contains the existing Memo Editor formatting capabilities, as well as a new and quicker way of creating and removing links, and a new feature for downloading memos in Hypertext Markup Language (HTML) format.

Memo Editor pane

Note: The Memo Editor pane does not replace the existing editor capability within memo fields.

The following list provides an overview of the features available in the Memo Editor pane:

  • Memo field selection: To switch between the active memo fields, click the drop-down list on the toolbar and select a field. You can select the Comments, Timelines Description, [Meta] Chat HTML, and Document Description fields, as well as many others.

    Note: From the memo fields drop-down list, the Memo Editor pane allows you to access only the one-to-one memo fields.

  • Hyperlinks: Creating hyperlinks to documents, binders, transcripts, or other data takes fewer mouse clicks using the Memo Editor pane. Hyperlinking is also available for both documents and entities. In previous releases, you could not hyperlink to entities.
    • When you enter text to create a link and either double-click or highlight the text, an inline menu appears that contains the Document link, Object link, Transcript link, Web link, and Remove link options, as shown in the following figure.
      Memo Editor inline menu

      After selecting a link option, a dialog box appears that allows you to search for and to select the link data.

      Note: The inline menu for linking replaces the Link toolbar button in the existing memo capability.

    • To view link contents, each link contains a tooltip that appears when you point to an existing link, as shown in the following figure.
      Link contents tooltip
    • Linked content opens when you hold down the Ctrl key on the keyboard and click the mouse.
    • To remove a link, double-click the link and select Remove link.

      Note: You cannot edit the text in existing links. You must first remove the link, then correct spelling errors or other mistakes made in the link text.

  • Auto-search linking: The Mentions feature allows you to do a quick search for document links.
    • When you type the hash (#) sign, followed by six or more characters of a Document ID, an inline list appears with matching search results, as shown in the following figure.
      Memo Editor inline list for entering document links

      Select an item from this list to automatically create a link to the selected document and insert the link into the memo.

      Tip: You cannot create a link back to an active document.

  • Downloading: The Download button allows you to export memos to HTML, as shown in the following figure.
    Memo Editor export sample of the memo

    The top portion of the HTML file shows general information such as the case, user, date downloaded, and other information. Memo text follows.

    • If the memo contains links, you can view the link contents in the same manner as in the Memo Editor pane. However, because transcript links are embedded data and do not have an associated URL, they do not open from the downloaded HTML file. They open only from the Memo Editor pane.

      Note: If you have not previously logged in to Nuix Discover, the login page appears before the linked document opens.

Search page option added to the Case Home menu

You can now access the Search page from the Case Home menu, as shown in the following figure.

Case Home Search page

Default start page for a group

Your administrator can now define the Nuix Discover page that appears for your group after you log in to the application. For example, if your administrator sets the start page for your group to the Documents page, that page appears after you log in, with Workspace A displayed.

View pane: MHT documents converted to PDF in the Native view in the View pane

The application now converts .mht documents to a PDF format when you access them in the Native view in the View pane, as shown in the following figure.

Native view in the View pane

Security > Features: Memo Editor pane configuration

To make the Memo Editor pane available to users, on the Case Home > Security > Features page, an administrator must set the Document – Memo editor feature to Allow for a group. By default, this feature is set to Deny.

Security > Groups: Set the default start page for a group

You can now set a default start page for a group. One of the benefits of this new feature is that you can, for example, route users directly to the Documents page so that they can start reviewing documents. Workspace A appears by default on the Documents page.

Use the following procedure to set the start page for an existing group.

  1. On the Security > Groups page, in the Name column, click the link for a group.
  2. On the Properties page, in the Start page list, shown in the following figure, select one of the following start pages: Documents, Search, Transcripts, Production Pages, Security, Case Setup, Manage Document, Review Setup, Analysis.

    Note: The Case Home page is the default start page.

    Properties page Start page pick list options
  3. Click Save.

    The next time a member of the group logs in to the application, the designated start page appears. For example, if you set the Documents page as the start page, the Documents page appears by default.

Use the following procedure to set the start page when you create a new group.

  1. On the Security > Groups page, on the toolbar, click Add.
  2. In the Create group dialog box, shown in the following figure, do the following:
    • In the Name box, provide a name.
    • In the Start page list, select a page.
      Create group dialog box
  3. Click Save.

Portal Management > User Administration: Require SAML users to re-enter credentials after logging out

System administrators can now require users who use a Security Assertion Markup Language (SAML) provider for authentication to re-enter their credentials after logging out of Nuix Discover.

To add this requirement, go to the Portal Home > User Administration > Identity Provider Settings page and click on the name of the configuration. On the Properties page, in the Configuration section, enter the following line:

"saml_force_reauth": "true"
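
For context, the setting sits inside the JSON configuration for the identity provider, as in the following sketch. Only the saml_force_reauth setting is documented here; the surrounding key is a hypothetical placeholder for the rest of the configuration.

```json
{
  "idp_metadata_url": "https://idp.example.com/metadata",
  "saml_force_reauth": "true"
}
```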

Portal Management > Processing: Download a log for a Supervisor

You can now download a log for a supervisor. The log includes error and info messages.

To download a log to a .csv format, on the Logs page for a Supervisor, click the Download logs button, shown in the following figure.

Download logs button

Portal Management > Settings: Text extraction: Update batching logic in text extraction job

In previous versions, the application processed text extraction jobs in batches using the number of files per batch specified in the Extract text job batch size case setting. (To access this setting, go to the Portal Management > Cases and Servers > Cases page and click the name of a case.)

To efficiently accommodate larger files, portal administrators can now set batch thresholds by file size using the Extract text job max batch file size portal setting, shown in the following figure.

To access this setting, go to the Portal Management > Settings > Portal Options page. The application sizes each text extraction batch using whichever limit is reached first: the number of files specified in the case setting or the maximum file size per batch specified in the portal setting.

Portal Management - Settings - Portal Options page showing information tooltip
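
The batch-size rule described above suggests logic along these lines; this is a sketch with assumed semantics, not the shipped implementation:

```python
def make_batches(file_sizes, max_files, max_batch_bytes):
    """Split files into extraction batches: a batch closes when adding the
    next file would exceed the file-count cap or the total-size cap."""
    batches, current, current_bytes = [], [], 0
    for size in file_sizes:
        if current and (len(current) >= max_files
                        or current_bytes + size > max_batch_bytes):
            batches.append(current)
            current, current_bytes = [], 0
        current.append(size)
        current_bytes += size
    if current:
        batches.append(current)
    return batches

# Four files; at most 3 files or 100 (size units) per batch.
print(make_batches([40, 50, 30, 10], max_files=3, max_batch_bytes=100))
# [[40, 50], [30, 10]]
```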

Import API: Delete files from the S3 bucket upon completion of an import job

If the import job setting is to copy files from S3, once the files are copied, the application deletes the files from the S3 bucket. The application deletes the files for only those import jobs that completed successfully. The application does not delete files in failed import jobs.

SaaS / Hosted Monthly Release Notes - March 2020 (10.2.005 - 10.2.008)

Translate: New and updated source languages

The Translate feature now includes additional source language options, for example, Irish and Punjabi, when translating with Microsoft.

Some of the source language options for Google have been renamed. For example, Portuguese has been renamed to Portuguese (Portugal, Brazil).

These new or updated source language options are available in the Translate workspace pane and the Tools > Translate dialog box.

Coding History for fields updated by import jobs

The Coding History feature now captures audit records for field values that are updated by import jobs for existing document records.

The Coding History pane will include the following information:

  • The updated field value.
  • The user who created the import job as well as the date and time of the import job.
  • The previously coded value that was changed.
  • The user who applied the coding as well as the date and time of the previous coding.

Note: Your administrator must grant you read access to these fields, so that the fields appear in the Coding History pane.

Imports: Delete data from S3 bucket after completing import jobs

If files in an import job are copied from S3, the application deletes the files that were in the S3 bucket once the import job is successfully completed.

Productions: New Quality Control check for annotations that are not applied to the production

A new quality control check, Annotations exist that are not selected to be applied, has been added to the Quality Control page for productions, as shown in the following figure. This check is enabled when at least one production rule other than Custom placeholder is selected on the Production rules page.

The Annotations exist that are not selected to be applied check identifies documents that have annotations applied to them that are not applied in the production.

If the application identifies any affected documents, a message that indicates the number of documents appears in the Result column on the Quality Control page for the production. Click the message to view the affected documents on the Documents page.

Documents page

Organizations: Set default file repositories

System administrators can now set default file repositories for an organization on the organization’s Properties page, as shown in the following figure.

Properties page

Note: The lists do not populate by default. The options in the lists include the file repositories that appear on the File Repositories page for an organization.

The options in this list include:

  • Image: Image or Index repositories
  • Index file: Image or Index repositories
  • File transfer: Image or Index repositories
  • Archive: Archive repositories
  • External: External repositories

The following three new columns now appear on the File Repositories page for an organization, as shown in the following figure.

File Repositories page
  • Default repository for: If a file repository is a default repository, the values for indexes or images appear in this column.

    Note: If a file repository is not linked to an organization, the default repository value does not appear on the Properties page for the organization.

  • Archive: If the file repository is the default archive repository, a dot appears in the Archive column.
  • External: If the file repository is an external file repository, a dot appears in the External column.

Organizations: Set default servers

System administrators can now set default servers for an organization on the Properties page, as shown in the following figure.

Note: The lists do not populate by default. The options in these lists include the servers that appear on the Servers page for an organization.

Servers page
  • Database server: Database servers that you have permission to access.
  • Analysis server: Analysis servers that you have permission to access.

A new Default column appears on the Servers page for an organization, as shown in the following figure.

If a server is a default server, a dot appears in the Default column.

Note: If no servers are linked to the organization, this information does not appear on the Properties page for an organization.

Properties page Defaule column

Processing > Supervisors: Logs page for RPF supervisors

A new Logs page is available in the navigation pane on the supervisor Properties page.

To access this page, from the Portal Home page, go to Portal Management > Processing > Supervisors and select a supervisor in the list. The Logs page displays log information about the supervisor, which can help you identify error messages that may not otherwise appear in the interface.

Connect API Explorer: Query assignment data for report generation

The Connect API Explorer allows you to gather assignment data to generate reports that can show process workflows, phases, and user assignments.

The following lists the available fields for an assignment object query:

  • id
  • status: Object that extracts the following values:
    • Unassigned
    • Active
    • Suspended
    • Cleared
    • Deleted
    • Revoked
  • workflow: Object to extract the following field data:
    • description
    • id
    • name
    • phases
  • phases: Object to extract the following field data:
    • documentsPerAssignment
    • id
    • locked
    • name
    • parentId
    • parentPhaseName
    • quickCode
    • validationCriteriaName
  • lot: Object to extract the following field data:
    • id
    • name
  • name
  • user
  • assignedDate
  • clearedDate
  • createdDate
  • clear
  • total

Sample query:

query {
  cases (id: 5) {
    reviewSetup {
      workflows (id: 7) {
        phases (id: 10) {
          id
        }
      }
      assignments (id: 8) {
        id
      }
    }
  }
}

Connect API Explorer: userUpdate mutation for administration tasks

The Connect API Explorer userUpdate mutation allows administrators to perform updates to multiple user accounts simultaneously. When building this mutation, you must include the userId field to identify the user accounts.

Optional fields:

  • firstName
  • lastName
  • email
  • companyId
  • identityProviderId
  • portalCategory
  • disabled
  • requirePasswordChange: Previously named forceReset
  • licenses
  • password
  • addToActiveDirectory
  • forceResetChallengeQuestions

Important: When passing a field value that is blank, the mutation will remove the field. For example, the mutation will remove the disabled field if you enter disabled: “”. When entering new values for either the firstName or lastName, the mutation updates the entire name.

Sample mutation:

mutation {
  userUpdate(input: [
    {userId: 200, firstName: “Fred”, lastName: “Doo”},
    {userId: 1, firstName: “Velma”},
    {userId: 1, lastName: “Doo”}
  ]) {
    users {
      id
      fullName
    }
  }
}

Connect API Explorer: Clone cases using caseClone mutation

The caseClone mutation allows you to quickly create new cases without having to use the Nuix Discover UI. The following describes the mutation acceptance criteria.

Required fields:

  • caseName
  • organizationId: Used to identify an organization’s default template used for cloning.

Optional fields:

  • sourceCaseId: Data based on a user’s organization. If the sourceCaseId is missing and there is a selected default template, the mutation uses the organization’s default template case. If the sourceCaseId is missing and there is no default template selected, the application returns the following message: A sourceCaseId must be included in this mutation when an organization does not have a default template case.
  • Description
  • scheduleMetricsJob = true (default): If true, schedule is set to Monthly on day 31 at 11:00 PM.

The following lists the non-configurable fields that inherit the organization’s default or have a hard-coded default:

  • active = true (default)
  • clearData = true (default)
  • databaseServerId
  • imageRepositoryId
  • indexRepositoryId
  • fileTransferRepositoryId
  • analysisServerId
  • archiveRepositoryId
  • externalRepositoryId

The following lists examples of some of the available result fields for use in the caseClone mutation:

  • processingStatus: Object that extracts the following case processing status:
    • Failure
    • Pending
    • Queued
    • Succeeded
    • SucceededWithWarnings
  • processingType: Object that extracts the following case processing type:
    • Clone
    • Connect
    • Create
    • Decommission
    • DeleteDecommissionCase
    • Edit
    • Recommission

Note: This mutation does not support the process of setting the case metrics schedule to (daily (time)), (Weekly (week day, time)), (monthly(day, time)).

Sample mutation query with defaults:

mutation clone {
  caseClone (input: {
    organizationId: 1,
    sourceCaseId: 2,
    caseName: “My new clone”
  }) {
    case {
      id
    }
  }
}

Sample mutation query with options:

mutation clone {
  caseClone (input: {
    organizationId: 1,
    sourceCaseId: 2,
    caseName: “My new clone”,
    description: “This is my cloned case”,
    scheduleMetricsJob: true
  }) {
    case {
      id
    }
  }
}

Connect API Explorer: Remove assigned users from cases using the userCaseUnassign mutation

The Connect API Explorer userCaseUnassign mutation allows you to remove assigned users from cases for easy management of case access. This mutation allows you to remove multiple assignments simultaneously by pairing a single userId to a caseId, or multiple ids to a caseId. Only the userId field allows this many-to-one removal format. All other fields can only remove in a one-to-one format.

When assigning a user to an organization, if the user has an existing assignment to that organization, the notChangedCount field will increase by the appropriate number.

Required fields:

  • userId
  • caseId

Sample mutation:

mutation {
  userCaseUnassign(input: [
    {userId: [7,9,10,15], caseId: 120},
    {userId: 11, caseId: 121},
    {userId: 8, caseId: 120}
  ]) {
    totalCount
    successCount
    errorCount
    notChangedCount
  }
}

Connect API Explorer: Assign users to organizations using the userOrganizationAssign mutation

The Connect API Explorer userOrganizationAssign mutation allows you to assign users to organizations to help manage user assignments. This mutation allows you to perform multiple assignments simultaneously by pairing a single userId to an organizationId, or multiple ids to an organizationId. Only the userId field allows this many-to-one assignment format. All other fields can only assign in a one-to-one format.

When assigning a user to an organization, if the user has an existing assignment to that organization, the notChangedCount field will increase by the appropriate number.

Required fields:

  • userId
  • organizationId

Sample mutation:

mutation {
  userOrganizationAssign(input: [
    {userId: [7,9,10,15], organizationId: 4},
    {userId: 7, organizationId: 10},
    {userId: 8, organizationId: 4}
  ]) {
    totalCount
    successCount
    errorCount
    notChangedCount
  }
}

Sample response:

{
  data: {
    userOrganizationAssign { totalCount: 6, successCount: 4, errorCount: 1, notChangedCount: 1 },
  },
  errors: [{ message: “Failed to assign the following users to organization 4: 8 }]
}

Connect API Explorer: Assign users to cases using the userCaseAssign mutation

The Connect API Explorer userCaseAssign mutation allows you to easily assign users to cases for easy management of case access. This mutation allows you to perform multiple assignments simultaneously by pairing a single userId to a caseId, or multiple ids to a caseId. Only the userId field allows this many-to-one assignment format. All other fields can only assign in a one-to-one format.

New assignments automatically set the Access Restrictions to None as the default. Currently, the mutation does not have the ability to change this setting to another option. You must modify these settings manually through the UI.

When assigning a user to a case, if the user has an existing assignment to that case, leaving the caseGroupId field blank will not change the existing caseGroupId data for that user. If a user was previously assigned to a group in a case, and that user is removed from that case, when they are re-added to the case without specifying a group, they will be placed back into the group to which they previously belonged.

When assigning a user to an organization, if the user has an existing assignment to that organization, the notChangedCount field will increase by the appropriate number.

Note: Portal administrators will not have the ability to assign a user to a case that is outside their own organization.

Required fields:

  • userId
  • caseId
  • caseUserCategory

Optional fields:

  • caseGroupId

Sample mutation:

mutation {
  userCaseAssign(input: [
    {userId: [7,9,10,15], caseId: 120, caseUserCategory: Administrator, caseGroupId: 34},
    {userId: [8], caseId: 120, caseUserCategory: GroupMember, caseGroupId: 34}
]) {
    totalCount
    successCount
    errorCount
    notChangedCount
  }
}

Connect API Explorer: Query users and their groups within cases

The Connect API Explorer allows you to query information on users and their groups within cases to help manage users and groups across review platforms. You can filter and sort the group data by name, id or userCount for NumericComparison. You can also separate the query results by page by using the standard_scroll_parameter (for example, scroll: \{start: 1, limit: 100}).

Note: To return the users of a specific group, add the user’s node under groups.

The following lists the available fields for querying user and group data:

  • groups: Object to extract the following field data:
    • id
    • name
    • userCount
    • timelineDate
    • quickCode
    • startPage
    • users

Sample query:

query cases {
  cases(id:5){
    name
    groups (id: 17 name: “group name” sort: [{ field: Name, dir: Asc }]) {
      id
      name
      userCount
      users {
        id
        name
      }
    }
  }
}

Connect API Explorer: Cross organization cloning using caseClone mutation

The mutation caseClone now allows the cloning of organizations without using the UI Extensions. The following is the acceptance criteria when using this process.

Required Fields:

  • caseName: Required data.
  • organizationId: Required data.
  • souceCaseId: Optional data with defaults based on user’s organization.
    • When not included, the mutation will use the organization’s default case template.
    • When not included and there is no default case template, the mutation uses the portal default case template.
    • When not included and there is no default case template or a portal case template, the application returns the following message: A sourceCaseId must be included in this mutation when the portal and organization do not have a default template case.
  • description: Optional data.
  • scheduleMetricsJob = true (default): Optional data. If true, schedule is set to Monthly on day 31 at 11:00 PM.
    • The mutation does not support setting the case metrics schedule as (daily (time)), (Weekly (week day, time)), (monthly(day, time)).

The following are non-configurable fields and inherit the organization defaults or have a hard-coded default:

  • active = true (default)
  • clearData = true (default)
  • databaseServerId
  • imageRepositoryId
  • indexRepositoryId
  • fileTransferRepositoryId
  • analysisServerId
  • archiveRepositoryId
  • externalRepositoryId

The following is an example of how to use these defaults and options.

Sample mutation with defaults:

mutation clone {
  caseClone(input: {
    sourceCaseId: 1,
    caseName: “My new cloned case”
  }) {
    case {
      id
    }
  }
}

Sample mutation with options:

mutation clone {
  caseClone(input: {
    organizationId: 11,
    sourceCaseId: 12,
    caseName: “My new cloned case”,
    description: “This case is described”,
    scheduleMetricsJob: true
  }) {
    case {
      id
    }
  }
}

SaaS / Hosted Monthly Release Notes - February 2020 (10.2.001 - 10.2.004)

Exports: New image formatting options

When exporting images in a custom export, you now have the option to convert images to searchable PDFs. This option is available on the Image settings page of the Export window.

Export - Image settings page showing Image format for PDFs

In the Image format list, if you select Convert to searchable PDFs, the application converts any non-PDF endorsable image files into searchable PDFs. For existing PDF image files, the application embeds text in the PDF file.

Note: If you select an option for image formatting that converts an image type, only the exported image file is affected. No files on the Nuix Discover fileshare are altered.

In the Image format list, if you select either Convert to searchable PDFs or Embed OCR text in existing PDFs, additional options are available. These options include PDF resolution, Performance, Auto-rotate, Despeckle, Deskew, and Languages. These options existed in previous releases for embedding OCR text in existing PDFs. However, the list of language options has been expanded to match the list of language options that is available in the OCR tool on the Documents page. On the Image settings page, you can click the Settings button (or gear) and select languages in the Settings window. The default language is English.

If you select either the Convert to searchable PDFs or Embed OCR text in existing PDFs options, you also have the option to select the Unless annotations or footers are applied, do not run OCR on PDFs if the documents are already coded as searchable check box. This check box is selected by default. When selected, for any existing PDF files, the application checks the Document OCR Status field. If that field is set to Completed – Embedded text in the PDF or Completed with warnings – Embedded text in the PDF and no annotations or footers are applied on any page of the document, then the application does not attempt to make that PDF file searchable.

Note: The application updates the Document OCR Status field for base or rendition documents if they are made searchable using the OCR tool on the Documents page. The application also updates this field through the production print process on rendition documents, if the option to embed text in existing PDFs is selected. If you make PDFs searchable using the OCR tool or the production print process, the language options may not be the same as the options selected during export.

Export - Image settings page Recognized language options

For efficiency, if the Unless annotations or footers are applied, do not run OCR on PDFs if the documents are already coded as searchable check box is not selected, the application attempts to make searchable only those pages that need to be.

  • The application attempts to make each page searchable that has annotations or footers.
  • If no annotations or footers exist on a page, the application checks for any text on the page. If text exists, the application uses the original page. Otherwise, the application attempts to make the page searchable.
  • Note: Language selections for exports may be different than the languages selected when making the original page searchable.
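
The per-page decision described above can be sketched as a small function. This is an illustrative reconstruction of the documented behavior, not product code:

```python
def should_make_page_searchable(has_annotations: bool,
                                has_footers: bool,
                                has_text: bool) -> bool:
    """Sketch of the per-page OCR decision described in the release note.

    Pages with annotations or footers are always re-made searchable;
    otherwise the original page is reused whenever it already has text.
    """
    if has_annotations or has_footers:
        return True
    return not has_text
```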

Productions: New PDF Settings page

We have added a new settings page for productions named PDF settings. This page contains settings that previously appeared on the Endorsements settings page when the Enable PDF annotations option was set for a case.

Note: When the Enable PDF annotations option is not set for the case at the time that a production is created, the PDF Settings page does not appear for that production.

Language options have been expanded on the new PDF Settings page. When embedding OCR text in PDF images during the production print process, you can select from the same list of languages to use for text recognition that appears in the OCR tool on the Documents page. You can also select more than one language.

PDF Settings page

If the Embed OCR text in existing PDF images option is selected on the page, the application updates the Document OCR Status field (and if needed, the Document OCR Error Details field) for the rendition document to reflect the OCR status of the PDF image of the rendition.

Connect API Explorer: Query extensions in the API

There is a new query in the Nuix Discover Connect API Explorer for retrieving a list of extensions.

This query retrieves the following extension data:

  • Id: Integer.
  • Name: String.
  • Location: Enumerator.
  • Configuration: String.
  • Description: String.
  • URL: String.

Sample query:

{
  extensions {
    id
    name
    location
    configuration
    url
    description
    createdDate
    createdByUser {
      id
      fullName
    }
  }
}

SaaS / Hosted Monthly Release Notes - March 2020 (10.2.005 - 10.2.008)

Translate: New and updated source languages

The Translate feature now includes additional source language options, for example, Irish and Punjabi, when translating with Microsoft.

Some of the source language options for Google have been renamed. For example, Portuguese has been renamed to Portuguese (Portugal, Brazil).

These new or updated source language options are available in the Translate workspace pane and the Tools > Translate dialog box.

Coding History for fields updated by import jobs

The Coding History feature now captures audit records for field values that are updated by import jobs for existing document records.

The Coding History pane will include the following information:

  • The updated field value.
  • The user who created the import job as well as the date and time of the import job.
  • The previously coded value that was changed.
  • The user who applied the coding as well as the date and time of the previous coding.

Note: Your administrator must grant you read access to these fields, so that the fields appear in the Coding History pane.

Imports: Delete data from S3 bucket after completing import jobs

If files in an import job are copied from S3, the application deletes the files that were in the S3 bucket once the import job is successfully completed.

Productions: New Quality Control check for annotations that are not applied to the production

A new quality control check, Annotations exist that are not selected to be applied, has been added to the Quality Control page for productions, as shown in the following figure. This check is enabled when at least one production rule other than Custom placeholder is selected on the Production rules page.

The Annotations exist that are not selected to be applied check identifies documents that have annotations applied to them that are not applied in the production.

If the application identifies any affected documents, a message that indicates the number of documents appears in the Result column on the Quality Control page for the production. Click the message to view the affected documents on the Documents page.

Documents page

Organizations: Set default file repositories

System administrators can now set default file repositories for an organization on the organization’s Properties page, as shown in the following figure.

Properties page

Note: The lists do not populate by default. The options in the lists include the file repositories that appear on the File Repositories page for an organization.

The options in each list are as follows:

  • Image: Image or Index repositories
  • Index file: Image or Index repositories
  • File transfer: Image or Index repositories
  • Archive: Archive repositories
  • External: External repositories

The following three new columns now appear on the File Repositories page for an organization, as shown in the following figure.

File Repositories page
  • Default repository for:
    • If a file repository is the default repository, the values for indexes or images appear in this column.
    • Note: If a file repository is not linked to an organization, the default repository value does not appear on the Properties page for the organization.

  • Archive: If the file repository is an archive file repository, a dot appears in the Archive column.
  • External: If the file repository is an external file repository, a dot appears in the External column.

Organizations: Set default servers

System administrators can now set default servers for an organization on the Properties page, as shown in the following figure.

Note: The lists do not populate by default. The options in these lists include the servers that appear on the Servers page for an organization.

Servers page
  • Database server: Database servers that you have permission to access.
  • Analysis server: Analysis servers that you have permission to access.

A new Default column appears on the Servers page for an organization, as shown in the following figure.

If a server is a default server, a dot appears in the Default column.

Note: If no servers are linked to the organization, this information does not appear on the Properties page for an organization.

Properties page Default column

Processing > Supervisors: Logs page for RPF supervisors

A new Logs page is available in the navigation pane on the supervisor Properties page.

To access this page, from the Portal Home page, go to Portal Management > Processing > Supervisors and select a supervisor in the list. The Logs page displays log information about the supervisor, which can help you identify error messages that may not otherwise appear in the interface.

Connect API Explorer: Query assignment data for report generation

The Connect API Explorer allows you to gather assignment data to generate reports that can show process workflows, phases, and user assignments.

The following lists the available fields for an assignment object query:

  • id
  • status: Object that extracts the following values:
    • Unassigned
    • Active
    • Suspended
    • Cleared
    • Deleted
    • Revoked
  • workflow: Object to extract the following field data:
    • description
    • id
    • name
    • phases
  • phases: Object to extract the following field data:
    • documentsPerAssignment
    • id
    • locked
    • name
    • parentId
    • parentPhaseName
    • quickCode
    • validationCriteriaName
  • lot: Object to extract the following field data:
    • id
    • name
  • name
  • user
  • assignedDate
  • clearedDate
  • createdDate
  • clear
  • total

Sample query:

query {
  cases (id: 5) {
    reviewSetup {
      workflows (id: 7) {
        phases (id: 10) {
          id
        }
      }
      assignments (id: 8) {
        id
      }
    }
  }
}
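
A query like the one above can be posted to the Connect API over HTTP. The following sketch only builds the request; the endpoint URL and token are assumptions to replace with your portal's values:

```python
import json
from urllib import request

# Hypothetical endpoint and token; substitute your portal's URL and API key.
API_URL = "https://portal.example.com/graphql"
API_TOKEN = "YOUR_API_TOKEN"

ASSIGNMENT_QUERY = """
query {
  cases (id: 5) {
    reviewSetup {
      workflows (id: 7) { phases (id: 10) { id } }
      assignments (id: 8) { id }
    }
  }
}
"""

def build_graphql_request(url, token, query):
    """Package a GraphQL query as a JSON POST request."""
    payload = json.dumps({"query": query}).encode("utf-8")
    return request.Request(url, data=payload, headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {token}",
    })

req = build_graphql_request(API_URL, API_TOKEN, ASSIGNMENT_QUERY)
# report = json.load(request.urlopen(req))  # run against a live portal
```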

Connect API Explorer: userUpdate mutation for administration tasks

The Connect API Explorer userUpdate mutation allows administrators to perform updates to multiple user accounts simultaneously. When building this mutation, you must include the userId field to identify the user accounts.

Optional fields:

  • firstName
  • lastName
  • email
  • companyId
  • identityProviderId
  • portalCategory
  • disabled
  • requirePasswordChange: Previously named forceReset
  • licenses
  • password
  • addToActiveDirectory
  • forceResetChallengeQuestions

Important: If you pass a blank value for a field, the mutation removes that field's value. For example, the mutation clears the disabled field if you enter disabled: "". When entering a new value for either firstName or lastName, the mutation updates the entire name.

Sample mutation:

mutation {
  userUpdate(input: [
    {userId: 200, firstName: "Fred", lastName: "Doo"},
    {userId: 1, firstName: "Velma"},
    {userId: 1, lastName: "Doo"}
  ]) {
    users {
      id
      fullName
    }
  }
}
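
Because a blank value clears a field, it can help to strip empty strings from the input before sending the mutation. A minimal sketch, with hypothetical data:

```python
def build_user_update_input(updates):
    """Drop blank values so fields are not unintentionally cleared by
    the userUpdate mutation, which treats "" as a removal request."""
    return [{k: v for k, v in u.items() if v != ""} for u in updates]

inputs = build_user_update_input([
    {"userId": 200, "firstName": "Fred", "lastName": ""},  # blank dropped
    {"userId": 1, "firstName": "Velma"},
])
```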

Connect API Explorer: Clone cases using caseClone mutation

The caseClone mutation allows you to quickly create new cases without having to use the Nuix Discover UI. The following describes the mutation acceptance criteria.

Required fields:

  • caseName
  • organizationId: Used to identify an organization’s default template used for cloning.

Optional fields:

  • sourceCaseId: Data based on a user’s organization. If the sourceCaseId is missing and there is a selected default template, the mutation uses the organization’s default template case. If the sourceCaseId is missing and there is no default template selected, the application returns the following message: A sourceCaseId must be included in this mutation when an organization does not have a default template case.
  • description
  • scheduleMetricsJob = true (default): If true, schedule is set to Monthly on day 31 at 11:00 PM.
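
The sourceCaseId fallback described above can be mirrored client-side before sending the mutation. An illustrative sketch with hypothetical parameter names:

```python
def resolve_source_case(source_case_id=None, default_template_id=None):
    """Mirror the documented sourceCaseId fallback (illustrative only)."""
    if source_case_id is not None:
        return source_case_id
    if default_template_id is not None:
        return default_template_id
    raise ValueError(
        "A sourceCaseId must be included in this mutation when an "
        "organization does not have a default template case.")
```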

The following lists the non-configurable fields that inherit the organization’s default or have a hard-coded default:

  • active = true (default)
  • clearData = true (default)
  • databaseServerId
  • imageRepositoryId
  • indexRepositoryId
  • fileTransferRepositoryId
  • analysisServerId
  • archiveRepositoryId
  • externalRepositoryId

The following are examples of some of the result fields available in the caseClone mutation:

  • processingStatus: Object that extracts the following case processing status:
    • Failure
    • Pending
    • Queued
    • Succeeded
    • SucceededWithWarnings
  • processingType: Object that extracts the following case processing type:
    • Clone
    • Connect
    • Create
    • Decommission
    • DeleteDecommissionCase
    • Edit
    • Recommission

Note: This mutation does not support setting the case metrics schedule to daily (time), weekly (weekday, time), or monthly (day, time).

Sample mutation query with defaults:

mutation clone {
  caseClone (input: {
    organizationId: 1,
    sourceCaseId: 2,
    caseName: "My new clone"
  }) {
    case {
      id
    }
  }
}

Sample mutation query with options:

mutation clone {
  caseClone (input: {
    organizationId: 1,
    sourceCaseId: 2,
    caseName: "My new clone",
    description: "This is my cloned case",
    scheduleMetricsJob: true
  }) {
    case {
      id
    }
  }
}

Connect API Explorer: Remove assigned users from cases using the userCaseUnassign mutation

The Connect API Explorer userCaseUnassign mutation allows you to remove assigned users from cases for easy management of case access. This mutation allows you to remove multiple assignments simultaneously by pairing a single userId to a caseId, or multiple ids to a caseId. Only the userId field allows this many-to-one removal format. All other fields can only remove in a one-to-one format.

When removing a user from a case, if the user does not have an existing assignment to that case, the notChangedCount field increases by the appropriate number.

Required fields:

  • userId
  • caseId

Sample mutation:

mutation {
  userCaseUnassign(input: [
    {userId: [7,9,10,15], caseId: 120},
    {userId: 11, caseId: 121},
    {userId: 8, caseId: 120}
  ]) {
    totalCount
    successCount
    errorCount
    notChangedCount
  }
}
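
A client can sanity-check the count fields returned by the mutation. A sketch, assuming totalCount is the sum of the other three counts:

```python
def counts_consistent(result):
    """Check that totalCount matches the other counts (assumed invariant)."""
    return result["totalCount"] == (result["successCount"]
                                    + result["errorCount"]
                                    + result["notChangedCount"])
```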

Connect API Explorer: Assign users to organizations using the userOrganizationAssign mutation

The Connect API Explorer userOrganizationAssign mutation allows you to assign users to organizations to help manage user assignments. This mutation allows you to perform multiple assignments simultaneously by pairing a single userId to an organizationId, or multiple ids to an organizationId. Only the userId field allows this many-to-one assignment format. All other fields can only assign in a one-to-one format.

When assigning a user to an organization, if the user has an existing assignment to that organization, the notChangedCount field will increase by the appropriate number.

Required fields:

  • userId
  • organizationId

Sample mutation:

mutation {
  userOrganizationAssign(input: [
    {userId: [7,9,10,15], organizationId: 4},
    {userId: 7, organizationId: 10},
    {userId: 8, organizationId: 4}
  ]) {
    totalCount
    successCount
    errorCount
    notChangedCount
  }
}

Sample response:

{
  "data": {
    "userOrganizationAssign": { "totalCount": 6, "successCount": 4, "errorCount": 1, "notChangedCount": 1 }
  },
  "errors": [{ "message": "Failed to assign the following users to organization 4: 8" }]
}
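
When some assignments fail, the errors array reports them alongside the counts. A small sketch for surfacing those messages, using data shaped like the sample response above:

```python
def failed_assignment_messages(response):
    """Collect error messages from a userOrganizationAssign response."""
    return [err["message"] for err in response.get("errors", [])]

sample = {
    "data": {"userOrganizationAssign": {"totalCount": 6, "successCount": 4,
                                        "errorCount": 1, "notChangedCount": 1}},
    "errors": [{"message": "Failed to assign the following users to organization 4: 8"}],
}
messages = failed_assignment_messages(sample)
```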

Connect API Explorer: Assign users to cases using the userCaseAssign mutation

The Connect API Explorer userCaseAssign mutation allows you to assign users to cases for easier management of case access. This mutation allows you to perform multiple assignments simultaneously by pairing a single userId to a caseId, or multiple ids to a caseId. Only the userId field allows this many-to-one assignment format. All other fields can only assign in a one-to-one format.

New assignments automatically set the Access Restrictions to None as the default. Currently, the mutation does not have the ability to change this setting to another option. You must modify these settings manually through the UI.

When assigning a user to a case, if the user has an existing assignment to that case, leaving the caseGroupId field blank will not change the existing caseGroupId data for that user. If a user was previously assigned to a group in a case, and that user is removed from that case, when they are re-added to the case without specifying a group, they will be placed back into the group to which they previously belonged.

When assigning a user to a case, if the user already has an existing assignment to that case, the notChangedCount field increases by the appropriate number.

Note: Portal administrators cannot assign a user to a case outside their own organization.

Required fields:

  • userId
  • caseId
  • caseUserCategory

Optional fields:

  • caseGroupId

Sample mutation:

mutation {
  userCaseAssign(input: [
    {userId: [7,9,10,15], caseId: 120, caseUserCategory: Administrator, caseGroupId: 34},
    {userId: [8], caseId: 120, caseUserCategory: GroupMember, caseGroupId: 34}
  ]) {
    totalCount
    successCount
    errorCount
    notChangedCount
  }
}

Connect API Explorer: Query users and their groups within cases

The Connect API Explorer allows you to query information on users and their groups within cases to help manage users and groups across review platforms. You can filter and sort the group data by name, id, or userCount for NumericComparison. You can also paginate the query results by using the standard scroll parameter (for example, scroll: {start: 1, limit: 100}).

Note: To return the users of a specific group, add the users node under groups.

The following lists the available fields for querying user and group data:

  • groups: Object to extract the following field data:
    • id
    • name
    • userCount
    • timelineDate
    • quickCode
    • startPage
    • users

Sample query:

query cases {
  cases(id:5){
    name
    groups (id: 17 name: "group name" sort: [{ field: Name, dir: Asc }]) {
      id
      name
      userCount
      users {
        id
        name
      }
    }
  }
}
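
Paging through large group lists with the scroll parameter can be automated by incrementing start. The following sketch only builds the query string for one page; the selected fields are taken from the example above:

```python
def scrolled_group_query(case_id, start, limit):
    """Build a groups query for one page of results (illustrative)."""
    return (
        "query cases {\n"
        f"  cases(id: {case_id}) {{\n"
        f"    groups (scroll: {{start: {start}, limit: {limit}}}) {{\n"
        "      id\n      name\n      userCount\n"
        "    }\n  }\n}"
    )

page_two = scrolled_group_query(5, 2, 100)
```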

Connect API Explorer: Cross organization cloning using caseClone mutation

The caseClone mutation now allows cloning cases across organizations without using UI extensions. The following describes the acceptance criteria for this process.

Fields:

  • caseName: Required data.
  • organizationId: Required data.
  • sourceCaseId: Optional data with defaults based on the user’s organization.
    • When not included, the mutation will use the organization’s default case template.
    • When not included and there is no default case template, the mutation uses the portal default case template.
    • When not included and there is no default case template or a portal case template, the application returns the following message: A sourceCaseId must be included in this mutation when the portal and organization do not have a default template case.
  • description: Optional data.
  • scheduleMetricsJob = true (default): Optional data. If true, the case metrics schedule is set to Monthly on day 31 at 11:00 PM.
    • The mutation does not support setting the case metrics schedule to Daily (time), Weekly (weekday, time), or Monthly (day, time).
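The sourceCaseId fallback order above can be sketched as a small helper. The function and its parameters are hypothetical, for illustration only; the error text mirrors the message quoted above.

```python
def resolve_source_case(source_case_id=None, org_default_template=None,
                        portal_default_template=None):
    """Return the case ID to clone from, mirroring the documented fallback order:
    an explicit sourceCaseId, then the organization's default case template,
    then the portal default case template, otherwise an error."""
    if source_case_id is not None:
        return source_case_id
    if org_default_template is not None:
        return org_default_template
    if portal_default_template is not None:
        return portal_default_template
    raise ValueError(
        "A sourceCaseId must be included in this mutation when the portal "
        "and organization do not have a default template case."
    )
```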

The following are non-configurable fields and inherit the organization defaults or have a hard-coded default:

  • active = true (default)
  • clearData = true (default)
  • databaseServerId
  • imageRepositoryId
  • indexRepositoryId
  • fileTransferRepositoryId
  • analysisServerId
  • archiveRepositoryId
  • externalRepositoryId

The following is an example of how to use these defaults and options.

Sample mutation with defaults:

mutation clone {
  caseClone(input: {
    sourceCaseId: 1,
    caseName: "My new cloned case"
  }) {
    case {
      id
    }
  }
}

Sample mutation with options:

mutation clone {
  caseClone(input: {
    organizationId: 11,
    sourceCaseId: 12,
    caseName: "My new cloned case",
    description: "This case is described",
    scheduleMetricsJob: true
  }) {
    case {
      id
    }
  }
}

SaaS / Hosted Monthly Release Notes - January 2020 (10.1.009 - 10.2.000)

Imports: Run indexing and enrichment using an import job

The Imports feature now allows you to request an indexing and enrichment job after an import job completes. On the Case Home > Manage Documents > Imports page, the Import Details page contains an option to Run indexing and enrichment, as shown in the following figure.

Import Details page

Selecting this option will run an indexing and enrichment job immediately after an import job completes. After adding a new import job, you can verify the selection of this option by clicking on the Import ID for that job and looking under the Import Details section of the Properties page, as shown in the following figure. The Run Indexing and Enrichment property indicates Yes if selected, or No if not selected.

Images and Natives Properties page

Ingestions: Add new system fields for ingestions

We have added the following three system fields to the Ingestions feature:

  • [Meta] Message Class: The message class MAPI property for email files. By default, this field is checked on the Customize Fields page in the Advanced Settings window for ingestions.
  • [Meta] PDF Properties: Extracted properties specific to PDF files. Most files will have multiple properties. Each value in this field has the name of the property followed by the value for that property. By default, this field is checked on the Customize Fields page in the Advanced Settings window for ingestions.
  • [Meta] Transport Message Headers: The message header for email files. By default, this field is unchecked on the Customize Fields page in the Advanced Settings window for ingestions.

Ingestions: NIST list updated - September 2019

Ingestions now uses an updated version of this list, released in September 2019. For more information, go to https://www.nist.gov/itl/ssd/software-quality-group/national-software-reference-library-nsrl.

Ingestions: Improvements to functionality and performance

Ingestions now uses the Nuix Workstation 8.2 processing engine. As a result, improvements to Ingestions include the following.

  • Handling of OneNote files is improved.
    • More content and attachments are extracted from OneNote data.
  • Support has been added for HEIC/HEIF file formats.
  • CAD drawing attachments are no longer treated as immaterial.
  • General improvements have been made to processing EnCase L01 files.

For a full list of features, see the Nuix Workstation 8.2 documentation.

Ingestions: Add error message information for corrupt documents

When the application encounters an ingestions error because of a corrupt document, information about that error appears in the [RT] Ingestion Detail field.

Load File Templates: Add new fields to the Variable builder for Load file templates

We have added two new expressions as options for load file template field values: Attach Count and Attach Filenames. These options are available for both general and production load file templates.

  • The Attach Count expression returns the number of immediate attachments associated with a parent document. If there are no immediate attachments, no value will be returned in the field.
  • The Attach Filenames expression lists the file names for immediate attachments associated with a parent document. The file name values are from the [Meta] File Name field. If there are no immediate attachments, no value will be returned in the field.

Processing > Jobs: Gather case metrics job captures total file size of base documents for non-document entity items

When you run a Gather case metrics job, in addition to capturing the file size of image, native, and content files associated with base documents, the application now also captures the total file size of the image, native, and content files associated with non-document entity items. This information appears in the Base documents (GB) column on the Portal Management > Reports > Hosted Details page.

Connect API Explorer: GraphQL and GraphQL Parser version upgrade

Connect API Explorer now contains the latest upgraded versions of GraphQL (v2.4.0) and GraphQL Parser (v4.1.2). These upgrades require a few minor changes to any existing API queries and code that declare Date variables.

In any existing API queries, the Date variable needs to change from Date to DateTime. The following figure is an example of an existing query declaring a Date variable before the upgrade.

Connect API Explorer API page showing Date variable

This next figure shows the needed change for the upgraded version of GraphQL.

Connect API Explorer API page showing DateTime variable

Connect API Explorer: API token enhancements

Newly created API authorization tokens no longer require separate API keys and will never expire. On the User Administration > API Access page, the API key label now shows the following message: The API key is not required for new authorization tokens.

The API authorization changes are backward compatible to accept existing authorization tokens, which will expire after three years.

To get a new key for an existing user, on the User Administration > API Access page, clear the Authorize this user to use the Connect API check box. Then select this option again to reactivate their authorization.

Connect API Explorer: New userAdd mutation

The new mutation userAdd allows the addition of new user accounts using the API. The following lists the accepted input data for this mutation.

  • firstName: Required data.
  • lastName: Required data.
  • username: Required data.
  • password: Required data.
  • email.
  • licenses: Default is Yes.
  • forceReset: Default is Yes.
  • portalCategory: Required data. Follows the same rules as the user interface (UI) for what the user submitting the mutation can assign.
  • organizationID: Follows the same rules as the UI for what the user submitting the mutation can assign.
  • companyID.
  • addToActiveDirectory: Required data. The default is Yes.

The following is an example of how to use this mutation.

Sample Mutation:

mutation newuser {
  userAdd(input: {
    firstName: "new",
    lastName: "user",
    userName: "newuser",
    password: "Qwerty12345",
    email: "newuser@user.com",
    forceReset: false,
    portalCategory: PortalAdministrator,
    licenses: 1,
    addToActiveDirectory: true
  }) {
    users {
      id
      organizations {
        name
        id
        accountNumber
      }
      identityProvider
      userName
      fullName
      companyName
    }
  }
}

Connect API Explorer: New userDelete mutation

The new mutation userDelete allows the deletion of user accounts using the API so that you can integrate your user management application with Nuix Discover. The following lists the accepted input data for this mutation.

  • If all specified users exist, executing the userDelete mutation with one or more userId values deletes all of them.
  • If some of the specified users do not exist, the mutation deletes the valid users and returns null for the id values of the users that do not exist.
  • If none of the specified users exist, the mutation returns null for all of the user id values.

Fields:

  • userId: An integer that identifies the user in the portal.

The following is an example of how to use this mutation.

Sample Mutation:

mutation userDelete {
  userDelete(input: {userId: [231]}) {
    users {
      id
    }
  }
}
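Given the rules above, a caller can pair the requested userId values with the returned users to separate deletions from unknown IDs. This helper is a sketch; it assumes the response preserves the request order, which is not stated in the release note.

```python
def summarize_delete_result(requested_ids, returned_users):
    """Split a userDelete response into (deleted IDs, not-found IDs).

    A returned user with a null id marks a user that did not exist.
    Assumes the response lists users in the order they were requested.
    """
    deleted, not_found = [], []
    for requested, user in zip(requested_ids, returned_users):
        if user.get("id") is None:
            not_found.append(requested)
        else:
            deleted.append(user["id"])
    return deleted, not_found
```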

Connect API Explorer: Access and download API documentation

There are two new buttons available on the Connect API Explorer page, as shown in the following figure.

API Download and Open Docs buttons

The Open docs button opens additional API documentation that contains more in-depth guidance on creating and handling queries and mutations. When you click the Open docs button, the Connect API Documentation tab appears, as shown in the following figure. On the left are active links to individual topics. Clicking a link scrolls the page to the selected topic.

API Documentation

Note: The top-right corner of the Connect API Documentation tab shows the URL location of the documentation and the current version of the document.

To download the documentation, click Download docs. This downloads the documentation as a Hypertext Markup Language (HTML) page for viewing in any browser window.

Import API: Run indexing and enrichment using createImportJob mutation

The createImportJob mutation now contains a parameter for running an indexing and enrichment job after an import job completes.

  • Name: runIndexing
  • Type: Boolean
  • Required: No
  • Default: false

The following is an example of how to use this parameter.

Sample mutation:

mutation {
  createImportJob (
    caseId:26,
    options:{
      name:"My Import Job",
      description:"Import job description",
      level:"Imports/Custodian A/0001",
      docsPerLevel:1000,
      updateGroupCoding:true,
      runIndexing:true
    }
  )
  {
    rdxJobId
  }
}

Note: If this parameter is set to true, an indexing and enrichment process will run after the import job.

Import API: Run deduplication in import job

The createImportJob mutation now allows the option to suppress documents from the import job as duplicates. When the runDeduplication parameter is set to true, the job will use the deduplication settings associated with Ingestions processing as follows:

  • Use the default setting for Case or Custodian. If there is no default setting, use Case.
  • Use the default setting for Only use the top parent documents to identify duplicates. If there is no default setting, use False.
  • Do not retain suppressed files regardless of the setting.

The following are some additional considerations that will take place during processing:

  • The Imports feature codes all imported documents with a Yes in the Exclude from Ingestions Deduplication field. This coding does not take place when deduplication is selected and the setting is Case or Custodian.
  • The files within suppressed documents will not transfer.
  • If suppressing a document that contains an existing document ID in main_suppressed, the application returns the following message: Document <doc ID> was identified as a duplicate to be suppressed, but it was not suppressed because a document with the same Document ID has already been suppressed in this case.

In the createImportJob mutation, add the following parameter under options:

  • Name: runDeduplication
  • Type: Boolean
  • Required: No
  • Default: false

Note: Select runDeduplication to run deduplication on the documents within this import, and to suppress duplicates. This process will use the deduplication settings for Ingestions.

The following is an example of how to use these parameters.

Sample mutation:

mutation {
  createImportJob (
    caseId:26,
    options:
    {
      level:"Imports",
      docsPerLevel:1000,
      updateGroupCoding:true,
      runDeduplication:true
    }
  )
  {
    rdxJobId
  }
}

On the Properties page for an import job, found on the Case Home > Manage Documents > Imports page, there is a new row under Statistics that reports on the number of suppressed documents, as shown in the following figure. This new row will only appear when using the deduplication option. If no duplicates are found, the value will appear as zero.

Import Job Statistics data

Import API: Assign sequential document IDs in an import job

The createImportJob mutation now contains parameters for assigning sequential document ID values for documents in the job.

  • Name: documentIdFormat
  • Valid values: Sequential or Existing
  • Required: No
  • Default: Existing

Note: Use a value of Sequential to have the application reassign document ID values for the documents within this import. Assignment of document IDs uses the provided prefix beginning with the next available document ID number matching that prefix and incrementing by 1 for each document.

  • Name: documentIdPrefix
  • Type: String
  • Required: No

Note: This is static text that appears at the beginning of each document ID only when using Sequential for the documentIdFormat option. If you do not provide this option, the application will use the document ID prefix setting from the Ingestions default settings.

When the documentIdFormat option is Sequential, the job generates a new document ID for all documents within the job. The generated ID will consist of a prefix from documentIdPrefix and a number value padded to nine digits beginning with the next available number in the case with the same prefix.

Document source and attachment relationships generate using the references in parentId based on the provided document ID values. If using sequential renumbering, document source and attachment relationships will generate only based on the parentId references within this job. Documents will not attach to prior existing documents.

If the document contains only one page, the page label will match the document ID. For documents containing multiple pages, the page labels update as DocID-00001, DocID-00002, DocID-00003, consecutively to the last page.

For files that are in pages, the page file name will match the existing page label such as DocID-00001.tif, DocID-00002.tif, and so on. For files not in pages, the file is named after the document ID, like DocID.xls.
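The numbering scheme described above (a nine-digit padded document number, five-digit page suffixes) can be expressed as a short sketch. These helpers are illustrative, not application code.

```python
def make_document_id(prefix, number):
    """Build a sequential document ID: the prefix plus a nine-digit padded number."""
    return f"{prefix}{number:09d}"

def make_page_labels(doc_id, page_count):
    """Single-page documents keep the document ID as the page label;
    multi-page documents append -00001, -00002, and so on."""
    if page_count == 1:
        return [doc_id]
    return [f"{doc_id}-{page:05d}" for page in range(1, page_count + 1)]
```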

The following is an example of how to use these parameters.

Sample mutation:

mutation {
  createImportJob (
    caseId:26,
    options:
    {
      level:"Imports",
      docsPerLevel:1000,
      updateGroupCoding:true,
      documentIdFormat:Sequential,
      documentIdPrefix:"Doc_"
    }
  )
  {
    rdxJobId
  }
}

Import API: Transfer files from S3 in createImportJob mutation

The createImportJob mutation now contains parameters to transfer files from S3.

  • Name: fileTransferLocation
  • Valid values: AmazonS3 or Windows
  • Required: No
  • Default: Windows

Note: The default is Windows. When selecting Windows, the files copy from the file repository designated for Images under the import\<case name> folder. When selecting AmazonS3, this mutation returns information needed to access the S3 bucket.

You can request the following S3 return values within the fileTransferLocationInfo field:

  • accessKey
  • secretAccessKey
  • token
  • repositoryType
  • regionEndpoint
  • bucketName
  • rootPrefix
  • expiration

Note: When the fileTransferLocation is AmazonS3, the mutation copies the files from the Amazon S3 bucket and folder created for the job rather than from the import folder on the agent.

The following is an example of how to use these parameters.

Sample mutation:

mutation {
  createImportJob (
    caseId:26,
    options:
    {
      level:"Imports",
      docsPerLevel:1000,
      updateGroupCoding:true,
      fileTransferLocation:AmazonS3
    }
  )
  {
    rdxJobId
    fileTransferLocationInfo
    {
        accessKey
        secretAccessKey
        token
        repositoryType
        regionEndpoint
        bucketName
        rootPrefix
        expiration
    }    
  }
}

Sample returned data:

{
  "data": {
    "createImportJob": {
      "rdxJobId": 1040,
      "temporaryFileTransferLocationConnectInfo": {
        "accessKey": "AEK_AccessKeyId",
        "secretAccessKey": "AEK_SecretAccessKey",
        "token": "AEK_SessionToken",
        "repositoryType": "AmazonS3",
        "regionEndpoint": "AEK_Region",
        "bucketName": "AEK_Bucket",
        "rootPrefix": "AEK_JobPrefix",
        "expiration": "2019-11-27T07:04:29.601994Z"
      }
    }
  }
}

Import API: New importJobS3Refresh mutation to refresh S3 credentials

The new mutation called importJobS3Refresh allows you to refresh credentials for an S3 folder created as part of an import job. These credentials expire after 12 hours. However, it is possible that transfer of files will continue past this time frame.

The importJobS3Refresh mutation takes the caseId and rdxJobId, which allow the application to look up the folder information. As an additional security measure, the mutation also takes the original accessKey and secretAccessKey, which must match the originally provided keys.

The following describes the mutation and parameters:

  • importJobS3Refresh: Obtains new file transfer location information for an existing import job.
  • accessKey (parameter): Uses the accessKey value previously returned for this import job.
  • secretAccessKey (parameter): Uses the secretAccessKey value previously returned for this import job.

If there is no S3 information for the provided job ID, the application returns the following error: There is no information available for this rdxJobId. If the accessKey or secretAccessKey does not match, the application returns the following error: The keys provided do not match the keys for this rdxJobId.

The following is an example of how to use these parameters and the possible returned data.

Sample mutation:

mutation {
  importJobS3Refresh (
    caseId:26,
    rdxJobId:324,
    accessKey:"AEK_AccessKeyId_Old",
    secretAccessKey:"AEK_SecretAccessKey_Old"
  )
  {
    rdxJobId
    fileTransferLocationInfo
    {
        accessKey
        secretAccessKey
        token
        repositoryType
        regionEndpoint
        bucketName
        rootPrefix
        expiration
    }    
  }
}

Sample returned data:

{
  "data": {
    "importJobS3Refresh": {
      "rdxJobId": 1040,
      "fileTransferLocationInfo": {
        "accessKey": "AEK_AccessKeyId",
        "secretAccessKey": "AEK_SecretAccessKey",
        "token": "AEK_SessionToken",
        "repositoryType": "AmazonS3",
        "regionEndpoint": "AEK_Region",
        "bucketName": "AEK_Bucket",
        "rootPrefix": "AEK_JobPrefix",
        "expiration": "2019-11-27T07:04:29.601994Z"
      }
    }
  }
}
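Because the credentials expire after 12 hours, a transfer loop can check the returned expiration timestamp and refresh before it lapses. This sketch assumes the ISO 8601 UTC format shown in the sample data; the margin is an arbitrary illustrative choice.

```python
from datetime import datetime, timedelta, timezone

def needs_refresh(expiration_iso, now=None, margin_minutes=15):
    """Return True when the S3 credentials expire within margin_minutes,
    signalling that importJobS3Refresh should be called."""
    # The sample data uses an ISO 8601 UTC timestamp with fractional seconds,
    # e.g. "2019-11-27T07:04:29.601994Z".
    expires = datetime.fromisoformat(expiration_iso.replace("Z", "+00:00"))
    now = now or datetime.now(timezone.utc)
    return expires - now <= timedelta(minutes=margin_minutes)
```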

Import API: Modifications to parameter requirements in FieldParams

The following are changes to the type and onetomany field parameters, which FieldParams no longer requires.

  • When not providing the type field parameter, the application will match on the field name only.
    • If no match is found, the application records the following error: The value for field <field name> for document <Document ID> was not imported. No such field exists, and no field type was provided to create a new field.
    • If a match is found on multiple existing fields, data will not import, and the application records the following error: The value for field <field name> for document <Document ID> was not imported. Multiple fields exist with the name provided, and no field type was provided.
  • When not providing the onetomany field parameter, if no match is found on the field name, the application creates a new field as one-to-many.
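The name-only matching described above reduces to the following lookup logic. This is an illustrative sketch, not the import code itself; the error strings echo the messages quoted above.

```python
def resolve_field_without_type(name, existing_fields):
    """Mirror the documented matching when the type parameter is omitted:
    match on field name only; zero or multiple matches are errors.

    existing_fields maps a field name to a list of field records.
    """
    matches = existing_fields.get(name, [])
    if len(matches) == 1:
        return matches[0]
    if not matches:
        raise LookupError(
            "No such field exists, and no field type was provided "
            "to create a new field."
        )
    raise LookupError(
        "Multiple fields exist with the name provided, and no field "
        "type was provided."
    )
```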

SaaS / Hosted Monthly Release Notes - December 2019 (10.1.005 - 10.1.008)

Analysis > Predictive Coding > Add custom Predictive Coding Templates

The Predictive Coding Templates page has been added to the Analysis capabilities in Nuix Discover and is available to all administrators. This page allows administrators to select the Standard or Standard + people template when setting up predictive coding or Continuous Active Learning (CAL) models, or to create their own templates.

Note: The Standard and Standard + people templates are available to all cases and cannot be modified.

Create a new Predictive Coding Template

To create a new template, go to the Case Home > Analysis > Predictive Coding Templates page and click Add. Add a name and description for the template, and then click Save. The Fields page opens for that template. To add fields to the template, select a field in the Add field list and click the + (plus sign) button.

Predictive Coding Templates Fields page Field selection

The following information applies to fields in a predictive coding template.

  • The values of date fields included in a template appear as text strings.
  • The weight for each field is 1 by default, but you can change the value to anything between 1 and 10. Weight reflects the amount of influence a field has on the model in relation to other fields in the template. For example, if you want People information to be more heavily considered in the model than other fields, adjust the weight value on the People fields to be higher than the other field weight values.
Predictive Coding Templates Fields page showing added field

The following information applies to all custom predictive coding templates.

  • Extracted text from documents is included in every template, although it is not listed as an item in the template. Every template also includes the training field of the model for which the template is selected.
  • Once a template is being used by a CAL or predictive coding model, it cannot be edited. Open the template’s Properties page to view the names of the models that are using the template.
Predictive Coding Templates Properties page

Clone a Predictive Coding Template

All custom templates can be cloned, regardless of whether they are in use. To clone a template, open the Fields page for the template and click Clone template. Update the template name as needed and click Save. The Fields page for the new template opens. Add fields, delete fields, or change any of the field weights on that page.

Delete a Predictive Coding Template

You can delete any custom predictive coding template that is not in use by a predictive coding or CAL model. To delete a template, open the Fields page for the template and click Delete template.

Use Predictive Coding Templates with CAL

Administrators now have the option to select a predictive coding template when configuring training for a model. To select a template, go to the Case Home > Analysis > Populations and Samples page and select a population. Then, open the Predictive Coding page for the population and click Configure training. On the Settings page, select a template in the Predictive coding template list.

Configure training Settings page

Note: You can change the predictive model template throughout the lifecycle of the training model. However, at the present time, the application only provides data about the current template selected for training and does not record the history of different templates that have been selected.

Use Predictive Coding Templates with the Predictive Coding standard workflow

To select a predictive coding template to use when adding a predictive model, go to the Case Home > Analysis > Predictive Models page and click Add. In the Add Predictive Model dialog box, select a predictive coding template in the Predictive coding template list.

Add Predictive Model page

Portal Management > Processing > Jobs: Size of Elasticsearch index captured during Gather case metrics job

If a case uses an Elasticsearch index, the Gather case metrics job now captures the size of the Elasticsearch index. The Elasticsearch index is used to capture the coding audit history.

Portal Management > Reports: Elasticsearch index size available in the Hosted Details report

If a case uses an Elasticsearch index, you can view the size of the Elasticsearch index for a case on the Reports > Hosted Details page. The name of the new column is Elasticsearch index (GB). The Elasticsearch index is used to capture the coding audit history.

Connect API: New case statistic in the API {cases{statistics}} query

The Nuix Discover Connect API contains a new sizeOfElasticSearchIndex field that returns the total size of the Elasticsearch index for cases. The Elasticsearch index stores the audit history records for coding changes that are viewable within the Coding History pane.

The following example uses the new sizeOfElasticSearchIndex field in the cases {statistics} object.

{
  cases {
    name
    statistics {
      sizeOfElasticSearchIndex
    }
  }
}

The sizeOfElasticSearchIndex field is also part of the aggregateTotalHostedSize statistic, which returns the sum of sizeofBaseDocumentsHostedDetails, sizeofRenditionsHostedDetails, aggregateDatabases, sizeOfElasticSearchIndex, dtIndexSize, sizeOfNonDocumentData, and sizeOfOrphanFiles.
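The aggregate can be reproduced client-side by summing the component statistics. The dictionary keys below mirror the field names listed in the text; treat them as assumptions if your portal version differs.

```python
# Component fields of aggregateTotalHostedSize, as named in the release note.
COMPONENT_FIELDS = [
    "sizeofBaseDocumentsHostedDetails",
    "sizeofRenditionsHostedDetails",
    "aggregateDatabases",
    "sizeOfElasticSearchIndex",
    "dtIndexSize",
    "sizeOfNonDocumentData",
    "sizeOfOrphanFiles",
]

def aggregate_total_hosted_size(statistics):
    """Sum the component statistics, treating missing or null values as 0."""
    return sum(statistics.get(field) or 0 for field in COMPONENT_FIELDS)
```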

SaaS / Hosted Monthly Release Notes - November 2019 (10.1.001 - 10.1.004)

Portal Management > Reports: Change the time zone

You can now change the time zone for the data that appears on the Portal Management > Reports > Usage and Hosted Details pages from local time to Coordinated Universal Time (UTC). Using UTC time allows the reports to display data consistently with reports that are generated through the API when querying for specific dates or date ranges. By default, the data appears in local time.

Use the following procedure to change the time zone from local time to UTC.

  1. On the Portal Management > Reports > Usage or Hosted Details page, on the toolbar, click the Time zone button.
  2. In the Time zone dialog box, shown in the following figure, select UTC time.
Time Zone dialog box
  3. Click OK.
The data displayed is then based on UTC time.

Portal Management > Reports: Subtotal column added to Hosted Details report

The Portal Management > Reports > Hosted Details page now includes a Subtotal (GB) column.

Note: The label for the Total size (GB) changed to Total (GB).

In the Subtotal (GB) column, you can view a subtotal of the active data, which includes the data in the following columns:

  • Base documents (GB)
  • Production renditions (GB)
  • Databases (GB)
  • Content index (GB)
  • Predict (GB)
  • Orphan (GB)

Portal Management > Settings > Log Options: Download a telemetry log file

The Portal Management > Settings > Log Options page includes a new button on the toolbar named Download log that you can use to download a telemetry log file. The application downloads the telemetry log data to a .log text file.

To keep the file size manageable, you can configure the number of records to maintain in the JSON string in the Telemetry archive configuration setting on the Portal Management > Settings > Log Options page. For example, as shown in the following figure, NRecentRecordsToReturn is set to 10000.

Telemetry archive configuration setting

SaaS / Hosted Monthly Release Notes - October 2019 (10.0.009 - 10.1.000)

Audio: Resubmit multiple previously transcribed documents

You can now resubmit audio documents to generate new transcriptions using the Transcribe audio option on the Tools menu. Doing so can be useful if you selected the wrong language model when you transcribed audio documents, or if errors occurred during the transcription job.

Before you resubmit previously transcribed documents, note the following:

  • After you resubmit the audio documents, the application removes any corrections that were made in the previous transcriptions.
  • You cannot resubmit documents that have annotations. Delete the annotations first.

Use the following procedure to resubmit previously transcribed audio documents.

  1. On the Tools menu, select Transcribe audio.
  2. In the Transcribe audio dialog box, shown in the following figure, do the following:
Transcribe audio confirmation message
    • Under Language model, select the language. You can select one of the following audio language models:
      • Arabic (Modern Standard)
      • Brazilian Portuguese
      • Chinese (Mandarin)
      • English (UK)
      • English (US)
      • French
      • German
      • Japanese
      • Korean
      • Spanish
    • Under Optional inclusions, select the check boxes for the documents that you would like to resubmit.
  3. Click OK.

Tools > OCR processing: Languages listed in alphabetical order in the OCR processing dialog box

In the OCR processing dialog box, available languages for OCR processing now appear in alphabetical order.

Ingestions: Show level settings in Add ingestion dialog box

In the Add ingestion dialog box, a read-only display of the default level settings for the case now appears under the Family deduplication setting.

For example, select the default settings for levels, as shown in the following figure.

Default settings Levels page

These levels appear in the Add ingestion dialog box under the Levels heading, as shown in the following figure.

Add ingestion dialog box

Exports: Updates to the MDB Classic export type

Two updates have been made to the MDB Classic export type in the Export window.

  • Administrators can export a production or a set of rendition documents. In previous releases, administrators could export only binders or base documents with this export type.
    • When creating an export from the Manage Documents page, administrators can select the MDB Classic export type.
    • When selecting rendition documents from search results for export using the Tools > Export menu option, administrators can select the MDB Classic export type from the Export type list.
  • Administrators can choose to populate the pages table of an MDB export file even if no files are selected for export.
    • If an administrator selects the option to export an MDB load file in the Export window but does not select any files to export, the pages table of the exported MDB file will be empty by default. However, administrators can now populate the pages table of the MDB file anyway. On the Load files page, in the Settings window (available when you click the Settings button, or gear), select the Populate the pages table of the MDB even if no files are selected for export check box.
Export Renditions Load files page Settings options