I found that one of the ZOO models is EfficientViT-B1 for classification, and the page says it will work on RVC2. I am having trouble deploying this model on RVC2. First the conversion to an RVC2 archive complained about HardSigmoid, and after I replaced that with ReLU the Python kernel started crashing. I have successfully deployed other models, but I want something more accurate, like EfficientViT-B1. The config I am using is:


{
        "config_version": CONFIG_VERSION,
        "model": {
            "metadata": {
                "name": "RVC2 EfficientViT-B1 Attribute Classifier",
                "path": "rvc2_efficientvit_b1_attributes_rvc2_compatible.onnx",  # Relative path in archive
                "precision": DataType.FLOAT32
            },
            "inputs": [
                {
                    "name": "input",  # Standard input name for EfficientViT-B1
                    "dtype": DataType.FLOAT32,
                    "input_type": InputType.IMAGE,
                    "shape": [1, 3, 224, 224],  # Fixed batch size for RVC2 compatibility
                    "layout": "NCHW",
                    "preprocessing": {
                        # ImageNet normalization (mean and std * 255 for preprocessing)
                        "mean": [123.675, 116.28, 103.53],  # ImageNet mean * 255
                        "scale": [58.395, 57.12, 57.375],
                        "dai_type": "RGB888p"
                    }
                }
            ],
            "outputs": [
                {
                    "name": "output",  # Standard output name
                    "dtype": DataType.FLOAT32,
                    "shape": [1, 5],  # 5 attributes output
                }
            ],
            "heads": [
                {
                    "parser": "ClassificationParser",  # Multi-label classification parser
                    "metadata": {
                        "postprocessor_path": None,
                        "classes": [
                            # index 0,
                            # index 1,
                            # index 2,
                            # index 3,
                            # index 4
                        ],
                        "n_classes": 5,
                        "input_width": 224,
                        "input_height": 224
                    },
                    "outputs": ["output"]
                }
            ]
        }
    }

I got the EfficientViT-B1 weights from the timm library. It has a hard-swish activation, which gets exported to an ONNX HardSigmoid and was causing the error.

Hi @ShivamSharma ,

Thanks for trying out our Model ZOO. Our EfficientViT-B1 from the ZOO is already converted and can be used out of the box. Since you are trying to use public weights for EfficientViT-B1, I suggest you first try our already compiled model.

You can check the documentation for inference, or try it out with our oak-examples repository:

  1. clone the repo
  2. move to the neural-networks/generic-example
  3. run python main.py -m luxonis/efficientvit:b1-224x224

To answer your question about the conversion error: EfficientViT is not directly exportable to RVC2 and needs some small adjustments to the ONNX file. In particular, we modified the Div node because it had the wrong rank.
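
If you want to inspect your own export for the operators discussed in this thread, a minimal sketch with the onnx package (the file name is an assumption) looks like this:

  import onnx

  # Load the exported model and list the Div and HardSigmoid nodes mentioned in
  # this thread, together with their names, so you can locate them in the graph.
  model = onnx.load("efficientvit_b1.onnx")
  for node in model.graph.node:
      if node.op_type in ("Div", "HardSigmoid"):
          print(node.op_type, node.name, list(node.input), list(node.output))

This only locates the nodes; the actual ZOO fix was a manual edit of the offending Div node.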

Let me know if that works for you.

Best, Jasa

    jasakerec

    I was able to run the compiled ZOO model on RVC2, but I want to train on my custom dataset. Right now, I am trying to use the ONNX ZOO model for EfficientViT as a backbone to train on my custom dataset, and I am getting some errors when generating the archive from this newly trained model.

    Is this the correct way to use the weights and archives from the Luxonis models for a custom dataset?

      ShivamSharma

      An ONNX file is optimized for inference and is not suitable for training. For training it's best to use another framework, either PyTorch or our training library luxonis-train. For your case you can try our library, since the EfficientViT backbone is directly supported, and once training is complete the model is automatically converted to RVC2.
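
      Roughly, the Python entry point looks like the sketch below; treat the exact class and method names as assumptions and double-check the luxonis-train documentation:

        from luxonis_train import LuxonisModel  # import path assumed; verify against the luxonis-train docs

        # Point the model at your training configuration file
        model = LuxonisModel("config.yaml")
        model.train()   # train on the dataset defined by the loader in the config
        model.export()  # export the trained model for conversion/deployment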

      How are you currently trying to train the model?

      Let me know if you need any help with luxonis-train.

      Best, Jasa

        jasakerec I'm converting an ONNX model to PyTorch to add a classification layer, then exporting it back to ONNX, but I'm running into issues.

        Thanks! I’ll check the Luxonis-train docs and try training. I suspect the main problem will be dataset formatting. Is there a standard dataset format Luxonis-train expects for classification? I can try exporting that format from CVAT.

          ShivamSharma Yes, luxonis-train does support different dataset formats (see here). You can export the data in any of the supported formats. For classification there is also a simple format available: a directory with subdirectories for each class, as shown in the tree below (a small helper sketch for building it follows the tree).

          Let me know if you run into any problems.

          Best, Jasa

          dataset_dir/
          ├── train/
          │   ├── class1/
          │   │   ├── img1.jpg
          │   │   ├── img2.jpg
          │   │   └── ...
          │   ├── class2/
          │   └── ...
          ├── valid/
          └── test/
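
          If your export gives you a CSV of filename/class pairs instead (e.g. from CVAT), a small hypothetical helper can arrange the images into this layout; paths and column names below are assumptions:

            import csv
            import shutil
            from pathlib import Path

            def build_split(csv_path: str, image_dir: str, out_dir: str) -> None:
                # Copy images into <out_dir>/<class>/<filename> based on "filename,class" rows
                out = Path(out_dir)
                with open(csv_path, newline="") as f:
                    for row in csv.DictReader(f):
                        class_dir = out / row["class"]
                        class_dir.mkdir(parents=True, exist_ok=True)
                        shutil.copy(Path(image_dir) / row["filename"], class_dir / row["filename"])

            # e.g. build_split("train_annotations.csv", "train_images", "dataset_dir/train")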

            jasakerec

            I used luxonis-train to train the predefined EfficientViT on my custom dataset. Instead of converting my dataset, I created a custom data loader and loss function. I was able to train it, but when I convert it to an RVC2 archive it fails on the HardSigmoid activation. I have pasted the error at the end. Please let me know how to fix this.

            
            
            Traceback (most recent call last):
              File "/tmp/ipykernel_2994566/4233414881.py", line 34, in <module>
                converted_model = convert.RVC2(
              File "/home/ssharm21/.local/share/mamba/envs/conversion_env/lib/python3.10/site-packages/modelconverter/hub/convert.py", line 54, in RVC2
                return cli_convert(
              File "/home/ssharm21/.local/share/mamba/envs/conversion_env/lib/python3.10/site-packages/modelconverter/hub/__main__.py", line 980, in convert
                wait_for_export(instance["dag_run_id"])
              File "/home/ssharm21/.local/share/mamba/envs/conversion_env/lib/python3.10/site-packages/modelconverter/cli/utils.py", line 405, in wait_for_export
                raise RuntimeError(f"Export failed with\n{logs}.")
            RuntimeError: Export failed with
            [2025-07-17 23:10:14.024407+00:00] {pod.py:1132} INFO - Building pod model-export-1cfns9qi with labels: {'dag_id': 'export_easyml', 'task_id': 'export_model', 'run_id': 'manual__2025-07-17T230950.0128960000-74dfd2937', 'kubernetes_pod_operator': 'True', 'try_number': '1'}
            WARNING  /usr/local/lib/python3.10/site-packages/albumentations/__ logging.py:86
            Albumentations is available: 2.0.8 (you have 1.4.22).                  
            Upgrade using: pip install -U albumentations. To disable               
            automatic update checks, set the environment variable                  
            NO_ALBUMENTATIONS_UPDATE to 1.                                         
            check_for_updates()                                                  
            `register_module` is deprecated, use `register` instead.               
            PUT_FILE_REGISTRY.register_module(module=plugin_class)               
            INFO     Simplifying ONNX.                                   base_exporter.py:90
            INFO     ONNX successfully simplified.                      base_exporter.py:107
            INFO     Saving simplified ONNX to                          base_exporter.py:111
            shared_with_container/outputs/ClassificationModel_                     
            to_rvc2_2025_07_17_23_13_42/intermediate_outputs/C                     
            lassificationModel-simplified.onnx                                     
            INFO     Executing `mo --output_dir                             subprocess.py:37
            shared_with_container/outputs/ClassificationModel_to_r                 
            vc2_2025_07_17_23_13_42/intermediate_outputs --output                  
            /classification/ClassificationHead/classification/0                    
            --compress_to_fp16 --input image[1 3 256 256]{f32}                     
            --mean_values image[123.675, 116.28, 103.53]                           
            --scale_values image[58.395, 57.12, 57.375]                            
            --reverse_input_channels --input_model                                 
            shared_with_container/outputs/ClassificationModel_to_r                 
            vc2_2025_07_17_23_13_42/intermediate_outputs/Classific                 
            ationModel-simplified.onnx`                                            
            INFO     Command `mo` finished in 3.12 seconds with return code subprocess.py:55
            0.                                                                     
            INFO     [ STDOUT ]:                                            subprocess.py:64
            Check for a new version of Intel(R) Distribution of                    
            OpenVINO(TM) toolkit here                                              
            https://software.intel.com/content/www/us/en/develop/t                 
            ools/openvino-toolkit/download.html?cid=other&source=p                 
            rod&campid=ww_2023_bu_IOTG_OpenVINO-2022-3&content=upg                 
            _all&medium=organic or on                                              
            https://github.com/openvinotoolkit/openvino                            
            [ INFO ] The model was converted to IR v11, the latest                 
            model format that corresponds to the source DL                         
            framework input/output format. While IR v11 is                         
            backwards compatible with OpenVINO Inference Engine                    
            API v1.0, please use API v2.0 (as of 2022.1) to take                   
            advantage of the latest improvements in IR v11.                        
            Find more information about API v2.0 and IR v11 at                     
            https://docs.openvino.ai/latest/openvino_2_0_transitio                 
            n_guide.html                                                           
            [ SUCCESS ] Generated IR version 11 model.                             
            [ SUCCESS ] XML file:                                                  
            /app/shared_with_container/outputs/ClassificationModel                 
            _to_rvc2_2025_07_17_23_13_42/intermediate_outputs/Clas                 
            sificationModel-simplified.xml                                         
            [ SUCCESS ] BIN file:                                                  
            /app/shared_with_container/outputs/ClassificationModel                 
            _to_rvc2_2025_07_17_23_13_42/intermediate_outputs/Clas                 
            sificationModel-simplified.bin                                         
            INFO     OpenVINO IR exported to                                 exporter.py:143
            shared_with_container/outputs/ClassificationModel_to_rv                
            c2_2025_07_17_23_13_42                                                 
            INFO     Executing `compile_tool -d MYRIAD -ip U8 -m            subprocess.py:37
            shared_with_container/outputs/ClassificationModel_to_r                 
            vc2_2025_07_17_23_13_42/intermediate_outputs/Classific                 
            ationModel-simplified.xml -o                                           
            shared_with_container/outputs/ClassificationModel_to_r                 
            vc2_2025_07_17_23_13_42/intermediate_outputs/blobs/Cla                 
            ssificationModel_8shave.blob -c /tmp/tmpg0t6lnon.conf`                 
            ERROR    Command `compile_tool` finished in 0.34 seconds with   subprocess.py:55
            return code 1.                                                         
            ERROR    [ STDERR ]:                                            subprocess.py:59
            [ GENERAL_ERROR ]                                                      
            /mnt/docker/openvino/src/plugins/intel_myriad/graph_tr                 
            ansformer/src/frontend/frontend.cpp:596 Failed to                      
            compile layer                                                          
            "/EfficientViT/feature_extractor.0/feature_extractor.0                 
            .2/HardSigmoid": unsupported layer type "HardSigmoid"                  
            ERROR    [ STDOUT ]:                                            subprocess.py:64
            OpenVINO Runtime version ......... 2022.3.0                            
            Build ...........                                                      
            2022.3.0-9213-bdadcd7583c-releases/2022/3                              
            Network inputs:                                                        
            image : u8 / [...]                                                 
            Network outputs:                                                       
            /classification/ClassificationHead/classification/                 
            0/sink_port_0 : f16 / [...]                                            
            ERROR    Encountered an exception in the conversion process!     __main__.py:348
            Traceback (most recent call last):                                     
            File "/usr/local/bin/modelconverter", line 8, in                     
            <module>                                                               
            sys.exit(app())                                                    
            |   |    -> <typer.main.Typer object at                            
            0x7e6930d04040>                                                        
            |   -> <built-in function exit>                                    
            -> <module 'sys' (built-in)>                                       
            File                                                                 
            "/usr/local/lib/python3.10/site-packages/typer/main.py"                
            , line 323, in __call__                                                
            return get_command(self)(*args, **kwargs)                          
            |           |      |       -> {}                            
            |           |      -> ()                                    
            |           -> <typer.main.Typer object at                  
            0x7e6930d04040>                                                        
            -> <function get_command at 0x7e6932111480>                 
            File                                                                 
            "/usr/local/lib/python3.10/site-packages/click/core.py"                
            , line 1161, in __call__                                               
            return self.main(*args, **kwargs)                                  
            |    |     |       -> {}                                    
            |    |     -> ()                                            
            |    -> <function TyperGroup.main at                        
            0x7e6932110f70>                                                        
            -> <TyperGroup >                                            
            File                                                                 
            "/usr/local/lib/python3.10/site-packages/typer/core.py"                
            , line 743, in main                                                    
            return _main(                                                      
            -> <function _main at 0x7e6932110310>                       
            File                                                                 
            "/usr/local/lib/python3.10/site-packages/typer/core.py"                
            , line 198, in _main                                                   
            rv = self.invoke(ctx)                                              
            |    |      -> <click.core.Context object at                  
            0x7e697b3cc160>                                                        
            |    -> <function MultiCommand.invoke at                      
            0x7e69321c6290>                                                        
            -> <TyperGroup >                                              
            File                                                                 
            "/usr/local/lib/python3.10/site-packages/click/core.py"                
            , line 1697, in invoke                                                 
            return                                                             
            _process_result(sub_ctx.command.invoke(sub_ctx))                       
            |               |       |       |      ->                   
            <click.core.Context object at 0x7e692972dff0>                          
            |               |       |       -> <function                
            Command.invoke at 0x7e69321c5d80>                                      
            |               |       -> <TyperCommand                    
            convert>                                                               
            |               -> <click.core.Context                      
            object at 0x7e692972dff0>                                              
            -> <function                                                
            MultiCommand.invoke.<locals>._process_result at                        
            0x7e692971f2e0>                                                        
            File                                                                 
            "/usr/local/lib/python3.10/site-packages/click/core.py"                
            , line 1443, in invoke                                                 
            return ctx.invoke(self.callback, **ctx.params)                     
            |   |      |    |           |   -> {'path':                 
            'gs://mlcloud-services-prod-bucket/0197c68d-8b5a-7850-a                
            d69-e817e290ba41/modelVersions/f828f0b9-00e6-4b1e-94f3-                
            ad8d82...                                                              
            |   |      |    |           ->                              
            <click.core.Context object at 0x7e692972dff0>                          
            |   |      |    -> <function convert at                     
            0x7e692971db40>                                                        
            |   |      -> <TyperCommand convert>                        
            |   -> <function Context.invoke at                          
            0x7e69321c4af0>                                                        
            -> <click.core.Context object at                            
            0x7e692972dff0>                                                        
            File                                                                 
            "/usr/local/lib/python3.10/site-packages/click/core.py"                
            , line 788, in invoke                                                  
            return __callback(*args, **kwargs)                                 
            |       -> {'path':                             
            'gs://mlcloud-services-prod-bucket/0197c68d-8b5a-7850-a                
            d69-e817e290ba41/modelVersions/f828f0b9-00e6-4b1e-94f3-                
            ad8d82...                                                              
            -> ()                                           
            File                                                                 
            "/usr/local/lib/python3.10/site-packages/typer/main.py"                
            , line 698, in wrapper                                                 
            return callback(**use_params)                                      
            |          -> {'target': <Target.RVC2:                      
            'rvc2'>, 'path':                                                       
            'gs://mlcloud-services-prod-bucket/0197c68d-8b5a-7850-a                
            d69-e817e290ba41/modelVersio...                                        
            -> <function convert at 0x7e692971d510>                     
            > File "/app/modelconverter/__main__.py", line 291, in                 
            convert                                                                
            out_models = exporter.run()                                        
            |        -> <function Exporter.run at                 
            0x7e692971c790>                                                        
            ->                                                    
            <modelconverter.packages.rvc2.exporter.RVC2Exporter                    
            object at 0x7e697a4fcdc0>                                              
            File "/app/modelconverter/packages/base_exporter.py",                
            line 124, in run                                                       
            output_path = self.export()                                        
            |    -> <function RVC2Exporter.export                
            at 0x7e69285c5cf0>                                                     
            ->                                                   
            <modelconverter.packages.rvc2.exporter.RVC2Exporter                    
            object at 0x7e697a4fcdc0>                                              
            File "/app/modelconverter/packages/rvc2/exporter.py",                
            line 216, in export                                                    
            return self.compile_superblob(args)                                
            |    |                 -> ['-d', 'MYRIAD',                  
            '-ip', 'U8', '-m',                                                     
            PosixPath('shared_with_container/outputs/Classification                
            Model_to_rvc2_2025_07_17_23_13_42/...                                  
            |    -> <function                                           
            RVC2Exporter.compile_superblob at 0x7e69285c5e10>                      
            ->                                                          
            <modelconverter.packages.rvc2.exporter.RVC2Exporter                    
            object at 0x7e697a4fcdc0>                                              
            File "/app/modelconverter/packages/rvc2/exporter.py",                
            line 249, in compile_superblob                                         
            default_blob_path = self.compile_blob(                             
            |    -> <function                              
            RVC2Exporter.compile_blob at 0x7e69285c5d80>                           
            ->                                             
            <modelconverter.packages.rvc2.exporter.RVC2Exporter                    
            object at 0x7e697a4fcdc0>                                              
            File "/app/modelconverter/packages/rvc2/exporter.py",                
            line 239, in compile_blob                                              
            self._subprocess_run(["compile_tool", *args],                      
            meta_name="compile_tool")                                              
            |    |                                 -> ['-d',                   
            'MYRIAD', '-ip', 'U8', '-m',                                           
            PosixPath('shared_with_container/outputs/Classification                
            Model_to_rvc2_2025_07_17_23_13_42/...                                  
            |    -> <function Exporter._subprocess_run at                      
            0x7e692971ca60>                                                        
            ->                                                                 
            <modelconverter.packages.rvc2.exporter.RVC2Exporter                    
            object at 0x7e697a4fcdc0>                                              
            File "/app/modelconverter/packages/base_exporter.py",                
            line 196, in _subprocess_run                                           
            subprocess_run(args, **kwargs)                                     
            |              |       -> {}                                       
            |              -> ['compile_tool', '-d', 'MYRIAD',                 
            '-ip', 'U8', '-m',                                                     
            PosixPath('shared_with_container/outputs/Classification                
            Model_to_rvc2_2025...                                                  
            -> <function subprocess_run at 0x7e6931037910>                     
            File "/app/modelconverter/utils/subprocess.py", line                 
            67, in subprocess_run                                                  
            raise SubprocessException(info_string)                             
            |                   -> 'Command                              
            `compile_tool` finished in 0.34 seconds with return                    
            code 1.\n[ STDERR ]:\n[ GENERAL_ERROR ]                                
            \n/mnt/docker/openvino/s...                                            
            -> <class                                                    
            'modelconverter.utils.exceptions.SubprocessException'>                 
            modelconverter.utils.exceptions.SubprocessException:                   
            Command `compile_tool` finished in 0.34 seconds with                   
            return code 1.                                                         
            [ STDERR ]:                                                            
            [ GENERAL_ERROR ]                                                      
            /mnt/docker/openvino/src/plugins/intel_myriad/graph_tra                
            nsformer/src/frontend/frontend.cpp:596 Failed to                       
            compile layer                                                          
            "/EfficientViT/feature_extractor.0/feature_extractor.0.                
            2/HardSigmoid": unsupported layer type "HardSigmoid"                   
            [ STDOUT ]:                                                            
            OpenVINO Runtime version ......... 2022.3.0                            
            Build ...........                                                      
            2022.3.0-9213-bdadcd7583c-releases/2022/3                              
            Network inputs:                                                        
            image : u8 / [...]                                                 
            Network outputs:                                                       
            /classification/ClassificationHead/classification/0                
            /sink_port_0 : f16 / [...]                                             
            ╭───────── Traceback (most recent call last) ─────────╮                
            │ /app/modelconverter/__main__.py:291 in convert      │                
            │                                                     │                
            │   288 │   │   │   │   │   output_dir=output_path,   │                
            │   289 │   │   │   │   )                             │                
            │   290 │   │   │                                     │                
            │ ❱ 291 │   │   │   out_models = exporter.run()       │                
            │   292 │   │   │   if not isinstance(out_models, lis │                
            │   293 │   │   │   │   out_models = [out_models]     │                
            │   294 │   │   │   if to == Format.NN_ARCHIVE:       │                
            │                                                     │                
            │ /app/modelconverter/packages/base_exporter.py:124   │                
            │ in run                                              │                
            │                                                     │                
            │   121 │   │   pass                                  │                
            │   122 │                                             │                
            │   123 │   def run(self) -> Path:                    │                
            │ ❱ 124 │   │   output_path = self.export()           │                
            │   125 │   │   new_output_path = (                   │                
            │   126 │   │   │   self.output_dir                   │                
            │   127 │   │   │   / Path(self.model_name).with_suff │                
            │                                                     │                
            │ /app/modelconverter/packages/rvc2/exporter.py:216   │                
            │ in export                                           │                
            │                                                     │                
            │   213 │   │   args += ["-m", xml_path]              │                
            │   214 │   │                                         │                
            │   215 │   │   if self.superblob:                    │                
            │ ❱ 216 │   │   │   return self.compile_superblob(arg │                
            │   217 │   │                                         │                
            │   218 │   │   return self.compile_blob(args)        │                
            │   219                                               │                
            │                                                     │                
            │ /app/modelconverter/packages/rvc2/exporter.py:249   │                
            │ in compile_superblob                                │                
            │                                                     │                
            │   246 │   │                                         │                
            │   247 │   │   orig_args = args.copy()               │                
            │   248 │   │                                         │                
            │ ❱ 249 │   │   default_blob_path = self.compile_blob │                
            │   250 │   │   │   orig_args                         │                
            │   251 │   │   │   + [                               │                
            │   252 │   │   │   │   "-o",                         │                
            │                                                     │                
            │ /app/modelconverter/packages/rvc2/exporter.py:239   │                
            │ in compile_blob                                     │                
            │                                                     │                
            │   236 │   │   │   │   ),                            │                
            │   237 │   │   │   ]                                 │                
            │   238 │   │                                         │                
            │ ❱ 239 │   │   self._subprocess_run(["compile_tool", │                
            │   240 │   │   logger.info(f"Blob compiled to {blob_ │                
            │   241 │   │   return blob_output_path               │                
            │   242                                               │                
            │                                                     │                
            │ /app/modelconverter/packages/base_exporter.py:196   │                
            │ in _subprocess_run                                  │                
            │                                                     │                
            │   193 │   │   │   args.extend(new_args)             │                
            │   194 │                                             │                
            │   195 │   def _subprocess_run(self, args: List[str] │                
            │ ❱ 196 │   │   subprocess_run(args, **kwargs)        │                
            │   197 │   │   self._cmd_info[meta_name] = [str(arg) │                
            │   198                                               │                
            │                                                     │                
            │ /app/modelconverter/utils/subprocess.py:67 in       │                
            │ subprocess_run                                      │                
            │                                                     │                
            │   64 │   │   │   log_message(f"[ STDOUT ]:\n{string │                
            │   65 │   │   info_string += f"\n[ STDOUT ]:\n{strin │                
            │   66 │   if result.returncode != 0:                 │                
            │ ❱ 67 │   │   raise SubprocessException(info_string) │                
            │   68 │                                              │                
            │   69 │   return result                              │                
            │   70                                                │                
            ╰─────────────────────────────────────────────────────╯                
            SubprocessException: Command `compile_tool` finished in                
            0.34 seconds with return code 1.                                       
            [ STDERR ]:                                                            
            [ GENERAL_ERROR ]                                                      
            /mnt/docker/openvino/src/plugins/intel_myriad/graph_tra                
            nsformer/src/frontend/frontend.cpp:596 Failed to                       
            compile layer                                                          
            "/EfficientViT/feature_extractor.0/feature_extractor.0.                
            2/HardSigmoid": unsupported layer type "HardSigmoid"                   
            [ STDOUT ]:                                                            
            OpenVINO Runtime version ......... 2022.3.0                            
            Build ...........                                                      
            2022.3.0-9213-bdadcd7583c-releases/2022/3                              
            Network inputs:                                                        
            image : u8 / [...]                                                 
            Network outputs:                                                       
            /classification/ClassificationHead/classification/0                
            /sink_port_0 : f16 / [...]                                             
            [2025-07-17 23:14:15.943332+00:00] {pod.py:984} INFO - Skipping deleting pod: model-export-1cfns9qi
            .

              ShivamSharma Thanks for giving luxonis-train a try. The problem is an incorrect opset_version used in the ONNX export. You can check the opset_version with:

              >>> import onnx
              >>> model = onnx.load("classification_light.onnx")
              >>> opset_version = model.opset_import[0].version
              >>> print(opset_version)

              In the config.yaml you can set the opset_version to 16 like this:

              exporter:
                onnx:
                  opset_version: 16

              and that should convert to RVC2 with no problems.
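
              For reference, if you ever export the ONNX manually with PyTorch instead of through luxonis-train, the same setting is passed directly to the exporter. A minimal sketch with a placeholder network (names and shapes are assumptions):

                import torch
                import torch.nn as nn

                # Placeholder network standing in for your trained model
                model = nn.Sequential(
                    nn.Conv2d(3, 8, 3), nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 5)
                )
                dummy = torch.randn(1, 3, 256, 256)  # matches the training input shape

                torch.onnx.export(
                    model,
                    dummy,
                    "classification.onnx",
                    opset_version=16,   # same setting as in config.yaml above
                    input_names=["image"],
                    output_names=["output"],
                )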

              Let me know if it works.

              Best, Jasa

                jasakerec

                Thanks for the reply. I used opset 16 and I no longer get that error, but when I deploy the model I get the following error:

                ---------------------------------------------------------------------------
                ValueError                                Traceback (most recent call last)
                Cell In[4], line 264
                    261 script_node.outputs["manip_img"].link(crop_node.inputImage)
                    263 # === CLASSIFICATION MODEL ===
                --> 264 classification_nn = pipeline.create(ParsingNeuralNetwork).build(crop_node.out, classification_archive)
                    265 classification_nn.input.setBlocking(False)
                    266 classification_nn.input.setMaxSize(1)
                
                File c:\Users\ssharm21\depthai-core\venv\lib\site-packages\depthai_nodes\node\parsing_neural_network.py:278, in ParsingNeuralNetwork.build(self, input, nn_source, fps)
                    275 kwargs = {"fps": fps} if fps else {}
                    276 self._nn.build(input, self._nn_archive, **kwargs)
                --> 278 self._updateParsers(self._nn_archive)
                    280 if len(self._parsers) > 1:
                    281     self._createSyncNode()
                
                File c:\Users\ssharm21\depthai-core\venv\lib\site-packages\depthai_nodes\node\parsing_neural_network.py:311, in ParsingNeuralNetwork._updateParsers(self, nnArchive)
                    309 def _updateParsers(self, nnArchive: dai.NNArchive) -> None:
                    310     self._removeOldParserNodes()
                --> 311     self._parsers = self._getParserNodes(nnArchive)
                
                File c:\Users\ssharm21\depthai-core\venv\lib\site-packages\depthai_nodes\node\parsing_neural_network.py:321, in ParsingNeuralNetwork._getParserNodes(self, nnArchive)
                    319 def _getParserNodes(self, nnArchive: dai.NNArchive) -> Dict[int, BaseParser]:
                    320     parser_generator = self._pipeline.create(ParserGenerator)
                --> 321     parsers = self._generateParsers(parser_generator, nnArchive)
                    322     for parser in parsers.values():
                    323         self._nn.out.link(
                    324             parser.input
                    325         )  # TODO: once NN node has output dictionary, link to the correct output
                
                File c:\Users\ssharm21\depthai-core\venv\lib\site-packages\depthai_nodes\node\parsing_neural_network.py:332, in ParsingNeuralNetwork._generateParsers(self, parserGenerator, nnArchive)
                    329 def _generateParsers(
                    330     self, parserGenerator: ParserGenerator, nnArchive: dai.NNArchive
                    331 ) -> Dict[int, BaseParser]:
                --> 332     return parserGenerator.build(nnArchive)
                
                File c:\Users\ssharm21\depthai-core\venv\lib\site-packages\depthai_nodes\node\parser_generator.py:78, in ParserGenerator.build(self, nn_archive, head_index, host_only)
                     74     for input in nn_archive.getConfig().model.inputs:
                     75         head_config["model_inputs"].append(
                     76             {"shape": input.shape, "layout": input.layout}
                     77         )
                ---> 78     parsers[index] = pipeline.create(parser).build(head_config)
                     80 return parsers
                
                File c:\Users\ssharm21\depthai-core\venv\lib\site-packages\depthai_nodes\node\parsers\classification.py:113, in ClassificationParser.build(self, head_config)
                    111 output_layers = head_config.get("outputs", [])
                    112 if len(output_layers) != 1:
                --> 113     raise ValueError(
                    114         f"Only one output layer supported for Classification, got {output_layers} layers."
                    115     )
                    116 self.output_layer_name = output_layers[0]
                    117 self.classes = head_config.get("classes", self.classes)
                
                ValueError: Only one output layer supported for Classification, got [] layers.

                My configuration is:

                config_version: 1.0.0
                exporter:
                  blobconverter:
                    active: True
                    shaves: 6
                    version: '2022.1'
                  data_type: "fp16"
                  input_shape: [1, 3, 256, 256]
                
                  name: null
                  onnx:
                    dynamic_axes: null
                    opset_version: 16
                  reverse_input_channels: true
                
                  upload_url: null
                
                loader:
                  image_source: image
                  name: CustomMultiLabelLoader
                  params:
                    test_csv_path: /home/ssharm21/Codes/yolov12/test/images/Test/cropped_test_annotations.csv
                    test_image_dir: /home/ssharm21/Codes/yolov12/test/images/Test/cropped_test_images
                    train_csv_path: /home/ssharm21/Codes/yolov12/training/images/Train/cropped_train_annotations.csv
                    train_image_dir: /home/ssharm21/Codes/yolov12/training/images/Train/cropped_train_images
                    val_csv_path: /home/ssharm21/Codes/yolov12/validation/images/Validation/cropped_val_annotations.csv
                    val_image_dir: /home/ssharm21/Codes/yolov12/validation/images/Validation/cropped_val_images
                  test_view: "test"
                  train_view: "train"
                  val_view: "val"
                
                model:
                  name: classification
                  losses:
                  - alias: null
                    attached_to: ClassificationHead
                    name: CustomMultiLabelLoss
                    reduction: mean
                    weight: 1.0
                
                  metrics: []
                
                  nodes:
                
                
                  - alias: null
                    freezing:
                      active: false
                      lr_after_unfreeze: null
                      unfreeze_after: null
                    inputs: []
                    name: EfficientViT
                    params:
                      variant: small
                    remove_on_export: false
                    task: backbone
                    task_name: backbone
                
                  - alias: null
                    freezing:
                      active: false
                      lr_after_unfreeze: null
                      unfreeze_after: null
                    inputs: [EfficientViT]
                    name: ClassificationHead
                    params:
                      n_classes: 5
                    remove_on_export: false
                    task: classification
                    task_name: /classification
                 
                  outputs: ["ClassificationHead"]
                  weights: null

                Also, I am getting another error now when converting to RVC2 blob:

                INFO     Converting ONNX to .blob                                                               export_utils.py:104
                Downloading output/80-peach-seahorse/export/classification_openvino_2022.1_6shave.blob...
                {
                    "exit_code": 1,
                    "message": "Command failed with exit code 1, command: /app/venvs/venv2022_1/bin/python /app/model_compiler/openvino_2022.1/converter.py --precisions FP16 --output_dir /tmp/blobconverter/f32e50e666af434c87c9cfa9857e59c7 --download_dir /tmp/blobconverter/f32e50e666af434c87c9cfa9857e59c7 --name classification --model_root /tmp/blobconverter/f32e50e666af434c87c9cfa9857e59c7",
                    "stderr": "[ ERROR ]  Numbers of inputs and mean/scale values do not match. \n For more information please refer to Model Optimizer FAQ, question #61. (https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=61#question-61)\n",
                    "stdout": "========== Converting classification to IR (FP16)\nConversion command: /app/venvs/venv2022_1/bin/python -- /app/venvs/venv2022_1/bin/mo --framework=onnx --data_type=FP16 --output_dir=/tmp/blobconverter/f32e50e666af434c87c9cfa9857e59c7/classification/FP16 --model_name=classification --input= '--scale_values=[123.675, 116.28, 103.53]' '--mean_values=[58.395, 57.12, 57.375]' --reverse_input_channels --data_type=FP16 --input_model=/tmp/blobconverter/f32e50e666af434c87c9cfa9857e59c7/classification/FP16/classification.onnx\n\nModel Optimizer arguments:\nCommon parameters:\n\t- Path to the Input Model: \t/tmp/blobconverter/f32e50e666af434c87c9cfa9857e59c7/classification/FP16/classification.onnx\n\t- Path for generated IR: \t/tmp/blobconverter/f32e50e666af434c87c9cfa9857e59c7/classification/FP16\n\t- IR output name: \tclassification\n\t- Log level: \tERROR\n\t- Batch: \tNot specified, inherited from the model\n\t- Input layers: \tNot specified, inherited from the model\n\t- Output layers: \tNot specified, inherited from the model\n\t- Input shapes: \tNot specified, inherited from the model\n\t- Source layout: \tNot specified\n\t- Target layout: \tNot specified\n\t- Layout: \tNot specified\n\t- Mean values: \t[58.395, 57.12, 57.375]\n\t- Scale values: \t[123.675, 116.28, 103.53]\n\t- Scale factor: \tNot specified\n\t- Precision of IR: \tFP16\n\t- Enable fusing: \tTrue\n\t- User transformations: \tNot specified\n\t- Reverse input channels: \tTrue\n\t- Enable IR generation for fixed input shape: \tFalse\n\t- Use the transformations config file: \tNone\nAdvanced parameters:\n\t- Force the usage of legacy Frontend of Model Optimizer for model conversion into IR: \tFalse\n\t- Force the usage of new Frontend of Model Optimizer for model conversion into IR: \tFalse\nOpenVINO runtime found in: \t/opt/intel/openvino2022_1/python/python3.8/openvino\nOpenVINO runtime version: \t2022.1.0-7019-cdb9bec7210-releases/2022/1\nModel Optimizer version: \t2022.1.0-7019-cdb9bec7210-releases/2022/1\n[WARN] 2025-07-21T23:14:06z frontends/onnx/frontend/src/ops_bridge.cpp 240\tCurrently ONNX operator set version: 16 is unsupported. Falling back to: 15\nFAILED:\nclassification\n"
                }
                Error during model archiving/conversion: 400 Client Error: BAD REQUEST for url: https://blobconverter.luxonis.com/compile?version=2022.1&no_cache=True

                  ShivamSharma, great that that error is gone. Regarding the new error: I can't really tell what is wrong, could you send me an exported NN archive? From the logs it looks like the output layers of the model are not properly defined. In the meantime, you could try removing task and task_name from the EfficientViT node, because those are intended only for heads. You could also try removing outputs: ["ClassificationHead"]; luxonis-train can detect the outputs automatically.

                  Let me know if any of those suggestions worked, and send me an exported NN archive.

                  Thanks, Jasa

                    jasakerec Here is the Google Drive link

                    I tested the RVC2 archive after removing the parameters you specified, but I got the same result. BTW, do you know why I received the second error in my previous reply?

                      ShivamSharma Hi, thanks for the zip file. In the logs (luxonis_train.log) I see that the ONNX and blob exports worked. Can you give me exactly the commands and files you used to get the second error? For the deployment error: could you send me the Python script and the NN archive that you used? Note that for deployment you should use the RVC2 NN archive and not the ONNX NN archive. With the ONNX NN archive that you sent me I was able to convert it through the Hub. I am attaching it below so you can use it during inference.

                      Archive here

                      Best, Jasa

                        jasakerec I am sorry, it wasn't clear whether you wanted the ONNX NN archive or the RVC2 NN archive. The code and the config I used are at this Google Drive link. I will update my reply once I deploy the file you gave me.

                        Edit 1: same error with your RVC2 archive:
                        Visualizer deployment failed: Only one output layer supported for Classification, got [] layers.
                        Edit 2:
                        @jasakerec Hi, can anyone at Luxonis help me with this?

                        5 days later

                        Hi @ShivamSharma ,
                        apologies for the late reply. The underlying issue that causes Visualizer deployment failed: Only one output layer supported for Classification, got [] layers. is that the NN archive (ONNX, and subsequently also RVC2) has an empty list under heads[0]["outputs"] (if you look at the config.json in the NN archive). We would expect this to be a list of output names for this head, so in your case I would expect to see something like:
                        "outputs": ["/classification/ClassificationHead/classification/0"]
                        If you want a quick fix, editing the config.json directly inside the RVC2 archive should do it; a sketch follows below.
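
                        A minimal sketch of that quick fix, assuming you have already unpacked the NN archive so that config.json sits next to the model file (re-pack the archive afterwards):

                            import json

                            with open("config.json") as f:
                                cfg = json.load(f)

                            # Fill in the missing output name for the classification head
                            cfg["model"]["heads"][0]["outputs"] = [
                                "/classification/ClassificationHead/classification/0"
                            ]

                            with open("config.json", "w") as f:
                                json.dump(cfg, f, indent=4)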

                        But for the holistic fix: the reason this was not auto-populated correctly lies a bit deeper in the luxonis-train/luxonis-ml integration. I see that you have a custom loader. You correctly implement the get() function where you construct the labels dictionary. There you specify the task_type to be "classification", and since no custom task is specified either, we automatically assume it is an empty string (""). Just for reference: if you did specify a custom task, the labels dict would look something like {"custom_task/classification": attrs}.
                        Now, the get_classes() method represents a mapping between tasks and their classes. Since in your case the task is an empty string, the correct get_classes() method would be:

                            @override
                            def get_classes(self) -> dict[str, dict[str, int]]:
                                # Return class mapping for 5 attributes using the correct format
                                return {
                                    "": {f"attribute_{i}": i for i in range(self.n_classes)} # note the empty string as task (key)
                                }

                        Also, just for reference and completeness: if you instead went with the custom_task mentioned before, then get_classes() would return {"custom_task": {f"attribute_{i}": ...}}, written out below.
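
                        A complete sketch of that variant, mirroring the snippet above:

                            @override
                            def get_classes(self) -> dict[str, dict[str, int]]:
                                # Same mapping, but keyed by the custom task name instead of ""
                                return {
                                    "custom_task": {f"attribute_{i}": i for i in range(self.n_classes)}
                                }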

                        To summarize:

                        • For a quick fix you can edit the config.json inside the NN archive and manually add the missing value under heads[0]["outputs"]
                        • For a complete fix you should correct the get_classes() definition to use the empty string as the key in the returned dictionary

                        Let me know if this makes sense and if you have any questions or issues. We are continuously trying to improve the documentation, and this is one of the places where we should make it clearer, as it is quite hard to spot.
                        Best,
                        Klemen