About this tool

This experimental tool helps you perform a systematic review of a digital forensics technique and create or update content for inclusion in the SOLVE-IT knowledge base. It works through four stages (TRWM) to populate content:

  1. Technique — Document the technique
  2. Results — Identify Digital Forensic Technique Results (DFTRs)
  3. Weaknesses — Systematically consider weaknesses using ASTM E3016-18 error classifications
  4. Mitigations — Propose mitigations for identified weaknesses

Sessions
Settings
How saving works: Your work is automatically saved to your browser's local storage every time you make a change — you don't need to do anything. This means if you close the tab or refresh the page, your session will still be there when you come back.

Auto-backup to file is an optional extra safety net. When enabled, it will periodically download a JSON backup file to your computer's Downloads folder. This protects against browser data being cleared, or if you need to move your work to a different computer. Note: your browser may show a download prompt or save bar each time a backup is created.

You can also manually save at any time using the Save progress to file button in the header.
This can be used either to create a new technique or to load an existing technique into this interface for refinement (see the 'Load existing technique' box).

Further guidance is provided alongside each field below.
Enter a technique ID (e.g. DFT-1002) or a GitHub URL to load existing data for amendment
Use format DFT-XXXX for existing techniques, or leave as placeholder for new ones
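If you want to pre-check identifiers before entering them, the DFT-XXXX convention can be validated with a simple pattern. This is an illustrative sketch only: it assumes the numeric part is one or more digits (the example DFT-1002 suggests four, but the guidance above does not fix a length).

```python
import re

# Assumed pattern for the DFT-XXXX convention described above:
# the literal prefix "DFT-" followed by one or more digits.
DFT_ID_PATTERN = re.compile(r"^DFT-\d+$")

def is_valid_technique_id(technique_id: str) -> bool:
    """Return True if the string matches the assumed DFT-XXXX convention."""
    return bool(DFT_ID_PATTERN.fullmatch(technique_id))
```

For new techniques, the placeholder can be left as-is and an ID assigned during review.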
If this is a subtechnique, search for and select the parent technique. Leave blank if this is a top-level technique.
Short, descriptive name for the existing or new technique.
A single-sentence definition of the technique, backed by literature where possible (max ~25 words, active voice)
0 words
Many techniques in digital forensics are referred to by multiple names. Adding synonyms helps SOLVE-IT users find the relevant technique. Press Enter or comma to add.
Additional context beyond the technique name and short description
Tools or implementations that exemplify this technique (press Enter or comma to add)
What type of data does this technique take as input? Search CASE/UCO and SOLVE-IT ontology classes.
Enter citation text (Harvard or BibTeX) — existing references will be matched automatically. For large works, consider noting the page or chapter number.
References should have meaningful implications for explaining or using the technique, not just be related to the topic. Add a relevance summary (max 280 chars) explaining why each reference matters.
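A relevance summary can be sanity-checked against the 280-character limit stated above before submission; a minimal sketch:

```python
MAX_RELEVANCE_CHARS = 280  # limit stated in the guidance above

def relevance_summary_ok(summary: str) -> bool:
    """Return True if the summary is non-empty and within the 280-char limit."""
    text = summary.strip()
    return 0 < len(text) <= MAX_RELEVANCE_CHARS
```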
DFTRs are the outputs that this technique produces. They will be used in the next stage to help enumerate potential weaknesses. DFTRs can be identified in several ways:
  • If the technique involves extracting artefacts (e.g. photos app examination), experiment with one or more examples and note the data points relevant to a forensic investigation — e.g. photo content, time taken, album names.
  • Look at forensic tools that already process this type of data and record the artefact types they extract — e.g. latitude and longitude recorded in a photo.
You can also map the outputs to ontology classes from UCO/CASE/SOLVE-IT. You may find the FOCAL webapp helpful to explore classes across all the ontologies in one interface. If classes do not exist you can suggest them, and this can be discussed during review.
As a reminder, when enumerating outputs the technique was named and described as:
Digital Forensic Technique Results
Only results with a name will be used in subsequent stages. If you have multiple output classes, consider whether they represent different DFTRs — though in some cases a single DFTR may have multiple class types. If no suitable class exists in the UCO/CASE/SOLVE-IT ontologies, type free text and press Enter to add a suggested class.
DFTRs defined: 0
Many different types of inputs or outputs detected. Consider whether this technique needs to be split into subtechniques.
Results Notes
Here you can provide additional notes on how the DFTRs were determined.
The DFTRs from the previous stage are shown below, mapped against potential error classes. Different versions of this mapping are possible (e.g. against the desirable digital evidence properties of authenticity, accuracy, and completeness), but for improved granularity this version uses the ASTM E3016-18 error classifications.

Consider each result in turn, using the error classes as prompts to think about what an error of that type might look like for a particular DFTR. Describe the nature of each weakness in the spaces provided. Example prompts for each error class are included in each section to help.
  • For each DFTR, a prompt for each error classification is provided to help you document potential weaknesses.
  • The error class checkboxes are pre-ticked for the relevant category, but you can tick additional classes to indicate a weakness could result in multiple types of error.
  • All weaknesses have equal status in the end result, regardless of which error class initially prompted you to think about them; the classifications are used only to enumerate weaknesses systematically.
Total weaknesses: 0
This presents a summary of the weaknesses identified and their error classes.
  • Review each weakness name to ensure it “stands alone” — e.g. “message not recovered” may be better reworded as “chat message not recovered” to preserve context.
  • You can also add references at this point to support the existence of weaknesses.
Weakness prompt data has changed since the last aggregation. Click "Re-aggregate" to update.
Weaknesses
Ensure weakness names stand alone without context. You can edit names and add references.
In this phase you consider mitigations for the weaknesses that have been identified. No systematic method of mitigation enumeration yet exists, but broadly speaking some categories are:
  • Checking results manually or with another tool
  • Testing
  • Using an alternative approach
  • Using a complementary approach
  • Checking for supporting or contradictory information
The weaknesses have been auto-populated below with their error classes summarised as a reminder.

As you type, existing mitigations from the SOLVE-IT knowledge base will be matched and can be linked. Free text that does not match an existing mitigation will propose a new one — IDs for new mitigations will be assigned later during the review process.
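The tool's actual matching logic is not documented here, but the match-or-propose behaviour described above can be sketched with simple fuzzy string matching. The mitigation names below are hypothetical examples, not entries from the real knowledge base.

```python
from difflib import get_close_matches

# Hypothetical existing mitigation names, for illustration only.
EXISTING_MITIGATIONS = [
    "Verify results with a second tool",
    "Test the tool against reference data",
]

def match_or_propose(text: str, existing=EXISTING_MITIGATIONS):
    """Link free text to a close existing mitigation, or propose a new one.

    New proposals carry no ID; in the real workflow, IDs are assigned
    later during review.
    """
    matches = get_close_matches(text, existing, n=1, cutoff=0.6)
    if matches:
        return ("linked", matches[0])
    return ("proposed-new", text)
```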
This presents a deduplicated summary of the mitigations proposed. Review to identify if individual mitigations could be merged, and add references for each mitigation where appropriate. You can also link mitigations to existing techniques — the search box will fetch from current SOLVE-IT data.
Mitigations
You can refine names, add references, and see which weaknesses each mitigation addresses.
This section provides a summary of the technique and associated weaknesses and mitigations. Once you are happy, you can download the JSON bundle which can then be copied into the TRWM submission section of the SOLVE-IT issue tracker.
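As a rough sketch of what "download the JSON bundle" involves, the code below assembles a technique, its weaknesses, and its mitigations into one JSON document. Every field name here is an assumption for illustration; the real bundle's schema is defined by the SOLVE-IT project, not by this sketch.

```python
import json

def build_bundle(technique: dict, weaknesses: list, mitigations: list) -> str:
    """Serialise the three TRWM outputs into one JSON string.

    Keys ("technique", "weaknesses", "mitigations") are hypothetical
    and chosen only to mirror the tool's three result types.
    """
    bundle = {
        "technique": technique,
        "weaknesses": weaknesses,
        "mitigations": mitigations,
    }
    return json.dumps(bundle, indent=2)
```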
Compact Summary
Export
For submission to the SOLVE-IT issue tracker.
Opens a pre-filled GitHub issue.
For local backup and later refinement.